2025-05-19 13:47:18.756086 | Job console starting
2025-05-19 13:47:18.767647 | Updating git repos
2025-05-19 13:47:18.844274 | Cloning repos into workspace
2025-05-19 13:47:19.106736 | Restoring repo states
2025-05-19 13:47:19.140808 | Merging changes
2025-05-19 13:47:19.140876 | Checking out repos
2025-05-19 13:47:19.466279 | Preparing playbooks
2025-05-19 13:47:20.088385 | Running Ansible setup
2025-05-19 13:47:24.427011 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-19 13:47:25.152043 |
2025-05-19 13:47:25.152206 | PLAY [Base pre]
2025-05-19 13:47:25.169455 |
2025-05-19 13:47:25.169595 | TASK [Setup log path fact]
2025-05-19 13:47:25.208770 | orchestrator | ok
2025-05-19 13:47:25.228898 |
2025-05-19 13:47:25.229045 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-19 13:47:25.273368 | orchestrator | ok
2025-05-19 13:47:25.298528 |
2025-05-19 13:47:25.298698 | TASK [emit-job-header : Print job information]
2025-05-19 13:47:25.349868 | # Job Information
2025-05-19 13:47:25.350064 | Ansible Version: 2.16.14
2025-05-19 13:47:25.350100 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-05-19 13:47:25.350134 | Pipeline: post
2025-05-19 13:47:25.350157 | Executor: 521e9411259a
2025-05-19 13:47:25.350178 | Triggered by: https://github.com/osism/testbed/commit/0a5ec277aeb4d3efc7e329dd9561f81343d620c8
2025-05-19 13:47:25.350201 | Event ID: c3720398-34b7-11f0-9f79-67d037bdc2d2
2025-05-19 13:47:25.358302 |
2025-05-19 13:47:25.358443 | LOOP [emit-job-header : Print node information]
2025-05-19 13:47:25.480364 | orchestrator | ok:
2025-05-19 13:47:25.480561 | orchestrator | # Node Information
2025-05-19 13:47:25.480596 | orchestrator | Inventory Hostname: orchestrator
2025-05-19 13:47:25.480621 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-19 13:47:25.480644 | orchestrator | Username: zuul-testbed04
2025-05-19 13:47:25.480665 | orchestrator | Distro: Debian 12.11
2025-05-19 13:47:25.480688 | orchestrator | Provider: static-testbed
2025-05-19 13:47:25.480708 | orchestrator | Region:
2025-05-19 13:47:25.480729 | orchestrator | Label: testbed-orchestrator
2025-05-19 13:47:25.480749 | orchestrator | Product Name: OpenStack Nova
2025-05-19 13:47:25.480768 | orchestrator | Interface IP: 81.163.193.140
2025-05-19 13:47:25.499656 |
2025-05-19 13:47:25.499789 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-19 13:47:25.960273 | orchestrator -> localhost | changed
2025-05-19 13:47:25.968883 |
2025-05-19 13:47:25.969019 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-19 13:47:27.007519 | orchestrator -> localhost | changed
2025-05-19 13:47:27.022143 |
2025-05-19 13:47:27.022283 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-19 13:47:27.312022 | orchestrator -> localhost | ok
2025-05-19 13:47:27.327544 |
2025-05-19 13:47:27.327741 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-19 13:47:27.380684 | orchestrator | ok
2025-05-19 13:47:27.400945 | orchestrator | included: /var/lib/zuul/builds/14da8de40697410c90def8b74f0720f7/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-19 13:47:27.409446 |
2025-05-19 13:47:27.409571 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-19 13:47:28.470932 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-19 13:47:28.471226 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/14da8de40697410c90def8b74f0720f7/work/14da8de40697410c90def8b74f0720f7_id_rsa
2025-05-19 13:47:28.471277 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/14da8de40697410c90def8b74f0720f7/work/14da8de40697410c90def8b74f0720f7_id_rsa.pub
2025-05-19 13:47:28.471309 | orchestrator -> localhost | The key fingerprint is:
2025-05-19 13:47:28.471341 | orchestrator -> localhost | SHA256:jSRf1l6Fe70+JGl7kTbrIqHCjK/XE3ULh6u3C9x0+hI zuul-build-sshkey
2025-05-19 13:47:28.471368 | orchestrator -> localhost | The key's randomart image is:
2025-05-19 13:47:28.471408 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-19 13:47:28.471435 | orchestrator -> localhost | | ..|
2025-05-19 13:47:28.471461 | orchestrator -> localhost | | . .. |
2025-05-19 13:47:28.471487 | orchestrator -> localhost | | . . o o ...|
2025-05-19 13:47:28.471511 | orchestrator -> localhost | | + = = +. o|
2025-05-19 13:47:28.471535 | orchestrator -> localhost | | S o.*.o.o|
2025-05-19 13:47:28.471567 | orchestrator -> localhost | | ..oEo= B |
2025-05-19 13:47:28.471592 | orchestrator -> localhost | | + .o+o+ * +|
2025-05-19 13:47:28.471616 | orchestrator -> localhost | | . = =.+.o = |
2025-05-19 13:47:28.471641 | orchestrator -> localhost | | .+.. oo=.+..|
2025-05-19 13:47:28.471667 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-19 13:47:28.471741 | orchestrator -> localhost | ok: Runtime: 0:00:00.568690
2025-05-19 13:47:28.480825 |
2025-05-19 13:47:28.480955 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-19 13:47:28.514796 | orchestrator | ok
2025-05-19 13:47:28.534225 | orchestrator | included: /var/lib/zuul/builds/14da8de40697410c90def8b74f0720f7/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-19 13:47:28.550222 |
2025-05-19 13:47:28.550371 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-19 13:47:28.575350 | orchestrator | skipping: Conditional result was False
2025-05-19 13:47:28.591365 |
2025-05-19 13:47:28.591495 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-19 13:47:29.197910 | orchestrator | changed
2025-05-19 13:47:29.206730 |
2025-05-19 13:47:29.206914 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-19 13:47:29.487606 | orchestrator | ok
2025-05-19 13:47:29.494928 |
2025-05-19 13:47:29.495050 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-19 13:47:29.936424 | orchestrator | ok
2025-05-19 13:47:29.943924 |
2025-05-19 13:47:29.944043 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-19 13:47:30.367697 | orchestrator | ok
2025-05-19 13:47:30.376942 |
2025-05-19 13:47:30.377087 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-19 13:47:30.411467 | orchestrator | skipping: Conditional result was False
2025-05-19 13:47:30.421296 |
2025-05-19 13:47:30.421422 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-19 13:47:30.901764 | orchestrator -> localhost | changed
2025-05-19 13:47:30.916722 |
2025-05-19 13:47:30.916876 | TASK [add-build-sshkey : Add back temp key]
2025-05-19 13:47:31.328025 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/14da8de40697410c90def8b74f0720f7/work/14da8de40697410c90def8b74f0720f7_id_rsa (zuul-build-sshkey)
2025-05-19 13:47:31.328568 | orchestrator -> localhost | ok: Runtime: 0:00:00.023547
2025-05-19 13:47:31.343572 |
2025-05-19 13:47:31.343724 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-19 13:47:31.781489 | orchestrator | ok
2025-05-19 13:47:31.794511 |
2025-05-19 13:47:31.794694 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-19 13:47:31.819986 | orchestrator | skipping: Conditional result was False
2025-05-19 13:47:31.878612 |
2025-05-19 13:47:31.878755 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-19 13:47:32.280256 | orchestrator | ok
2025-05-19 13:47:32.293693 |
2025-05-19 13:47:32.293822 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-19 13:47:32.333588 | orchestrator | ok
2025-05-19 13:47:32.341794 |
2025-05-19 13:47:32.341957 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-19 13:47:32.628637 | orchestrator -> localhost | ok
2025-05-19 13:47:32.637342 |
2025-05-19 13:47:32.637454 | TASK [validate-host : Collect information about the host]
2025-05-19 13:47:33.868340 | orchestrator | ok
2025-05-19 13:47:33.885283 |
2025-05-19 13:47:33.885400 | TASK [validate-host : Sanitize hostname]
2025-05-19 13:47:33.961638 | orchestrator | ok
2025-05-19 13:47:33.970785 |
2025-05-19 13:47:33.971030 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-19 13:47:34.546458 | orchestrator -> localhost | changed
2025-05-19 13:47:34.553274 |
2025-05-19 13:47:34.553388 | TASK [validate-host : Collect information about zuul worker]
2025-05-19 13:47:35.048259 | orchestrator | ok
2025-05-19 13:47:35.053759 |
2025-05-19 13:47:35.053890 | TASK [validate-host : Write out all zuul information for each host]
2025-05-19 13:47:35.635554 | orchestrator -> localhost | changed
2025-05-19 13:47:35.662828 |
2025-05-19 13:47:35.663035 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-19 13:47:35.946656 | orchestrator | ok
2025-05-19 13:47:35.953295 |
2025-05-19 13:47:35.953409 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-19 13:48:24.422811 | orchestrator | changed:
2025-05-19 13:48:24.423202 | orchestrator | .d..t...... src/
2025-05-19 13:48:24.423242 | orchestrator | .d..t...... src/github.com/
2025-05-19 13:48:24.423268 | orchestrator | .d..t...... src/github.com/osism/
2025-05-19 13:48:24.423290 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-19 13:48:24.423311 | orchestrator | RedHat.yml
2025-05-19 13:48:24.437760 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-19 13:48:24.437778 | orchestrator | RedHat.yml
2025-05-19 13:48:24.437831 | orchestrator | = 1.53.0"...
2025-05-19 13:48:43.671840 | orchestrator | 13:48:43.671 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-19 13:48:43.753094 | orchestrator | 13:48:43.752 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-19 13:48:45.063865 | orchestrator | 13:48:45.063 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-19 13:48:46.499387 | orchestrator | 13:48:46.499 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-19 13:48:47.447846 | orchestrator | 13:48:47.447 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-05-19 13:48:48.363901 | orchestrator | 13:48:48.363 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-19 13:48:49.305665 | orchestrator | 13:48:49.305 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-19 13:48:50.154244 | orchestrator | 13:48:50.153 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-19 13:48:50.154294 | orchestrator | 13:48:50.153 STDOUT terraform: Providers are signed by their developers.
2025-05-19 13:48:50.154300 | orchestrator | 13:48:50.153 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-19 13:48:50.154305 | orchestrator | 13:48:50.153 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-19 13:48:50.154309 | orchestrator | 13:48:50.153 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-19 13:48:50.154317 | orchestrator | 13:48:50.153 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-19 13:48:50.154323 | orchestrator | 13:48:50.153 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-19 13:48:50.154328 | orchestrator | 13:48:50.153 STDOUT terraform: you run "tofu init" in the future.
2025-05-19 13:48:50.154332 | orchestrator | 13:48:50.153 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-19 13:48:50.154336 | orchestrator | 13:48:50.153 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-19 13:48:50.154340 | orchestrator | 13:48:50.153 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-19 13:48:50.154344 | orchestrator | 13:48:50.153 STDOUT terraform: should now work.
2025-05-19 13:48:50.154348 | orchestrator | 13:48:50.153 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-19 13:48:50.154352 | orchestrator | 13:48:50.154 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-19 13:48:50.154357 | orchestrator | 13:48:50.154 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-19 13:48:50.385062 | orchestrator | 13:48:50.384 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-19 13:48:50.572879 | orchestrator | 13:48:50.572 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-19 13:48:50.572977 | orchestrator | 13:48:50.572 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-19 13:48:50.573123 | orchestrator | 13:48:50.572 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-19 13:48:50.573169 | orchestrator | 13:48:50.573 STDOUT terraform: for this configuration.
2025-05-19 13:48:50.801617 | orchestrator | 13:48:50.801 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
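For orientation, the provider resolution logged above is what OpenTofu produces from a requirements block roughly like the following. This is a minimal sketch inferred from the log, not the testbed's actual code: the openstack bound is taken from the truncated ">= 1.53.0" fragment earlier in the log, and the resolved versions appear as comments.

terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # resolved to v3.0.0 in this run
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # resolved to v2.5.3
    }
    null = {
      source = "hashicorp/null" # unconstrained, so "latest" (v3.2.4 here)
    }
  }
}

The .terraform.lock.hcl written by the init step pins exactly these selections, and the freshly created "ci" workspace keeps this run's state isolated from any other state for the same configuration.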
2025-05-19 13:48:50.922720 | orchestrator | 13:48:50.922 STDOUT terraform: ci.auto.tfvars
2025-05-19 13:48:50.929060 | orchestrator | 13:48:50.928 STDOUT terraform: default_custom.tf
2025-05-19 13:48:51.172333 | orchestrator | 13:48:51.172 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-19 13:48:52.196591 | orchestrator | 13:48:52.196 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-19 13:48:52.693416 | orchestrator | 13:48:52.693 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-19 13:48:52.891173 | orchestrator | 13:48:52.890 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-19 13:48:52.891252 | orchestrator | 13:48:52.891 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-19 13:48:52.891261 | orchestrator | 13:48:52.891 STDOUT terraform:  + create
2025-05-19 13:48:52.891455 | orchestrator | 13:48:52.891 STDOUT terraform:  <= read (data resources)
2025-05-19 13:48:52.891548 | orchestrator | 13:48:52.891 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-19 13:48:52.891579 | orchestrator | 13:48:52.891 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-05-19 13:48:52.891592 | orchestrator | 13:48:52.891 STDOUT terraform:  # (config refers to values not yet known)
2025-05-19 13:48:52.891663 | orchestrator | 13:48:52.891 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-19 13:48:52.891722 | orchestrator | 13:48:52.891 STDOUT terraform:  + checksum = (known after apply)
2025-05-19 13:48:52.891842 | orchestrator | 13:48:52.891 STDOUT terraform:  + created_at = (known after apply)
2025-05-19 13:48:52.891860 | orchestrator | 13:48:52.891 STDOUT terraform:  + file = (known after apply)
2025-05-19 13:48:52.891926 | orchestrator | 13:48:52.891 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.892028 | orchestrator | 13:48:52.891 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.892108 | orchestrator | 13:48:52.892 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-05-19 13:48:52.892179 | orchestrator | 13:48:52.892 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-05-19 13:48:52.892230 | orchestrator | 13:48:52.892 STDOUT terraform:  + most_recent = true
2025-05-19 13:48:52.892303 | orchestrator | 13:48:52.892 STDOUT terraform:  + name = (known after apply)
2025-05-19 13:48:52.892374 | orchestrator | 13:48:52.892 STDOUT terraform:  + protected = (known after apply)
2025-05-19 13:48:52.892445 | orchestrator | 13:48:52.892 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.892516 | orchestrator | 13:48:52.892 STDOUT terraform:  + schema = (known after apply)
2025-05-19 13:48:52.892588 | orchestrator | 13:48:52.892 STDOUT terraform:  + size_bytes = (known after apply)
2025-05-19 13:48:52.892659 | orchestrator | 13:48:52.892 STDOUT terraform:  + tags = (known after apply)
2025-05-19 13:48:52.892731 | orchestrator | 13:48:52.892 STDOUT terraform:  + updated_at = (known after apply)
2025-05-19 13:48:52.892765 | orchestrator | 13:48:52.892 STDOUT terraform:  }
2025-05-19 13:48:52.892893 | orchestrator | 13:48:52.892 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-05-19 13:48:52.892964 | orchestrator | 13:48:52.892 STDOUT terraform:  # (config refers to values not yet known)
2025-05-19 13:48:52.893064 | orchestrator | 13:48:52.892 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-19 13:48:52.893134 | orchestrator | 13:48:52.893 STDOUT terraform:  + checksum = (known after apply)
2025-05-19 13:48:52.893205 | orchestrator | 13:48:52.893 STDOUT terraform:  + created_at = (known after apply)
2025-05-19 13:48:52.893276 | orchestrator | 13:48:52.893 STDOUT terraform:  + file = (known after apply)
2025-05-19 13:48:52.893347 | orchestrator | 13:48:52.893 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.893420 | orchestrator | 13:48:52.893 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.893489 | orchestrator | 13:48:52.893 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-05-19 13:48:52.893561 | orchestrator | 13:48:52.893 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-05-19 13:48:52.893608 | orchestrator | 13:48:52.893 STDOUT terraform:  + most_recent = true
2025-05-19 13:48:52.893681 | orchestrator | 13:48:52.893 STDOUT terraform:  + name = (known after apply)
2025-05-19 13:48:52.893752 | orchestrator | 13:48:52.893 STDOUT terraform:  + protected = (known after apply)
2025-05-19 13:48:52.893819 | orchestrator | 13:48:52.893 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.893890 | orchestrator | 13:48:52.893 STDOUT terraform:  + schema = (known after apply)
2025-05-19 13:48:52.894086 | orchestrator | 13:48:52.893 STDOUT terraform:  + size_bytes = (known after apply)
2025-05-19 13:48:52.894194 | orchestrator | 13:48:52.894 STDOUT terraform:  + tags = (known after apply)
2025-05-19 13:48:52.894254 | orchestrator | 13:48:52.894 STDOUT terraform:  + updated_at = (known after apply)
2025-05-19 13:48:52.894281 | orchestrator | 13:48:52.894 STDOUT terraform:  }
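Both image lookups above are deferred to apply time because their name argument depends on values that are unknown while planning. A minimal sketch of such a lookup; the variable is illustrative, standing in for whatever computed expression the testbed configuration actually uses:

# Placeholder input; in the real configuration the image name is computed,
# which is why the plan shows name = (known after apply).
variable "image_name" {
  type = string
}

data "openstack_images_image_v2" "image" {
  name        = var.image_name
  most_recent = true # if several images share the name, take the newest
}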
2025-05-19 13:48:52.894346 | orchestrator | 13:48:52.894 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-05-19 13:48:52.894408 | orchestrator | 13:48:52.894 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-05-19 13:48:52.894483 | orchestrator | 13:48:52.894 STDOUT terraform:  + content = (known after apply)
2025-05-19 13:48:52.894559 | orchestrator | 13:48:52.894 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-19 13:48:52.894632 | orchestrator | 13:48:52.894 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-19 13:48:52.894703 | orchestrator | 13:48:52.894 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-19 13:48:52.894776 | orchestrator | 13:48:52.894 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-19 13:48:52.894850 | orchestrator | 13:48:52.894 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-19 13:48:52.894926 | orchestrator | 13:48:52.894 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-19 13:48:52.894996 | orchestrator | 13:48:52.894 STDOUT terraform:  + directory_permission = "0777"
2025-05-19 13:48:52.895049 | orchestrator | 13:48:52.894 STDOUT terraform:  + file_permission = "0644"
2025-05-19 13:48:52.895123 | orchestrator | 13:48:52.895 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-05-19 13:48:52.895198 | orchestrator | 13:48:52.895 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.895225 | orchestrator | 13:48:52.895 STDOUT terraform:  }
2025-05-19 13:48:52.895349 | orchestrator | 13:48:52.895 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-05-19 13:48:52.895406 | orchestrator | 13:48:52.895 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-05-19 13:48:52.895482 | orchestrator | 13:48:52.895 STDOUT terraform:  + content = (known after apply)
2025-05-19 13:48:52.895554 | orchestrator | 13:48:52.895 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-19 13:48:52.895626 | orchestrator | 13:48:52.895 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-19 13:48:52.895697 | orchestrator | 13:48:52.895 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-19 13:48:52.895770 | orchestrator | 13:48:52.895 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-19 13:48:52.895840 | orchestrator | 13:48:52.895 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-19 13:48:52.895912 | orchestrator | 13:48:52.895 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-19 13:48:52.895977 | orchestrator | 13:48:52.895 STDOUT terraform:  + directory_permission = "0777"
2025-05-19 13:48:52.896029 | orchestrator | 13:48:52.895 STDOUT terraform:  + file_permission = "0644"
2025-05-19 13:48:52.896092 | orchestrator | 13:48:52.896 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-05-19 13:48:52.896166 | orchestrator | 13:48:52.896 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.896193 | orchestrator | 13:48:52.896 STDOUT terraform:  }
2025-05-19 13:48:52.896242 | orchestrator | 13:48:52.896 STDOUT terraform:  # local_file.inventory will be created
2025-05-19 13:48:52.896294 | orchestrator | 13:48:52.896 STDOUT terraform:  + resource "local_file" "inventory" {
2025-05-19 13:48:52.896366 | orchestrator | 13:48:52.896 STDOUT terraform:  + content = (known after apply)
2025-05-19 13:48:52.896435 | orchestrator | 13:48:52.896 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-19 13:48:52.896507 | orchestrator | 13:48:52.896 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-19 13:48:52.896579 | orchestrator | 13:48:52.896 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-19 13:48:52.896654 | orchestrator | 13:48:52.896 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-19 13:48:52.896722 | orchestrator | 13:48:52.896 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-19 13:48:52.896793 | orchestrator | 13:48:52.896 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-19 13:48:52.896842 | orchestrator | 13:48:52.896 STDOUT terraform:  + directory_permission = "0777"
2025-05-19 13:48:52.896911 | orchestrator | 13:48:52.896 STDOUT terraform:  + file_permission = "0644"
2025-05-19 13:48:52.897019 | orchestrator | 13:48:52.896 STDOUT terraform:  + filename = "inventory.ci"
2025-05-19 13:48:52.897077 | orchestrator | 13:48:52.896 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.897106 | orchestrator | 13:48:52.897 STDOUT terraform:  }
2025-05-19 13:48:52.897168 | orchestrator | 13:48:52.897 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-05-19 13:48:52.897231 | orchestrator | 13:48:52.897 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-05-19 13:48:52.897296 | orchestrator | 13:48:52.897 STDOUT terraform:  + content = (sensitive value)
2025-05-19 13:48:52.897367 | orchestrator | 13:48:52.897 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-19 13:48:52.897439 | orchestrator | 13:48:52.897 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-19 13:48:52.897511 | orchestrator | 13:48:52.897 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-19 13:48:52.897584 | orchestrator | 13:48:52.897 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-19 13:48:52.897656 | orchestrator | 13:48:52.897 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-19 13:48:52.897729 | orchestrator | 13:48:52.897 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-19 13:48:52.897779 | orchestrator | 13:48:52.897 STDOUT terraform:  + directory_permission = "0700"
2025-05-19 13:48:52.897829 | orchestrator | 13:48:52.897 STDOUT terraform:  + file_permission = "0600"
2025-05-19 13:48:52.897890 | orchestrator | 13:48:52.897 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-05-19 13:48:52.898009 | orchestrator | 13:48:52.897 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.898056 | orchestrator | 13:48:52.898 STDOUT terraform:  }
2025-05-19 13:48:52.898118 | orchestrator | 13:48:52.898 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-05-19 13:48:52.898180 | orchestrator | 13:48:52.898 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-05-19 13:48:52.898222 | orchestrator | 13:48:52.898 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.898250 | orchestrator | 13:48:52.898 STDOUT terraform:  }
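The local_file and local_sensitive_file resources materialize the generated artifacts (manager address, public key, inventory, private key) on the orchestrator so later job steps can consume them; note the tighter 0600/0700 permissions on the private key. A rough sketch, assuming the contents are rendered elsewhere in the configuration (the local.* references are placeholders):

resource "local_file" "inventory" {
  content  = local.rendered_inventory           # placeholder
  filename = "inventory.${terraform.workspace}" # yields "inventory.ci" here
}

resource "local_sensitive_file" "id_rsa" {
  content              = local.ssh_private_key  # placeholder
  filename             = ".id_rsa.${terraform.workspace}"
  file_permission      = "0600" # keep the private key unreadable to others
  directory_permission = "0700"
}

# No arguments of its own; a null_resource like this usually serves as a
# depends_on anchor so later steps can wait on all node resources at once.
resource "null_resource" "node_semaphore" {}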
2025-05-19 13:48:52.898354 | orchestrator | 13:48:52.898 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-19 13:48:52.898452 | orchestrator | 13:48:52.898 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-19 13:48:52.898516 | orchestrator | 13:48:52.898 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.898559 | orchestrator | 13:48:52.898 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.898622 | orchestrator | 13:48:52.898 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.898687 | orchestrator | 13:48:52.898 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 13:48:52.898749 | orchestrator | 13:48:52.898 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.898827 | orchestrator | 13:48:52.898 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-05-19 13:48:52.898893 | orchestrator | 13:48:52.898 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.898932 | orchestrator | 13:48:52.898 STDOUT terraform:  + size = 80
2025-05-19 13:48:52.899088 | orchestrator | 13:48:52.898 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.899147 | orchestrator | 13:48:52.898 STDOUT terraform:  }
2025-05-19 13:48:52.899168 | orchestrator | 13:48:52.899 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-19 13:48:52.899186 | orchestrator | 13:48:52.899 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 13:48:52.899261 | orchestrator | 13:48:52.899 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.899300 | orchestrator | 13:48:52.899 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.899364 | orchestrator | 13:48:52.899 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.899427 | orchestrator | 13:48:52.899 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 13:48:52.899491 | orchestrator | 13:48:52.899 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.899572 | orchestrator | 13:48:52.899 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-05-19 13:48:52.899635 | orchestrator | 13:48:52.899 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.899679 | orchestrator | 13:48:52.899 STDOUT terraform:  + size = 80
2025-05-19 13:48:52.899723 | orchestrator | 13:48:52.899 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.899740 | orchestrator | 13:48:52.899 STDOUT terraform:  }
2025-05-19 13:48:52.899840 | orchestrator | 13:48:52.899 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-19 13:48:52.899932 | orchestrator | 13:48:52.899 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 13:48:52.900039 | orchestrator | 13:48:52.899 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.900081 | orchestrator | 13:48:52.900 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.900156 | orchestrator | 13:48:52.900 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.900203 | orchestrator | 13:48:52.900 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 13:48:52.900270 | orchestrator | 13:48:52.900 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.900357 | orchestrator | 13:48:52.900 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-05-19 13:48:52.900420 | orchestrator | 13:48:52.900 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.900469 | orchestrator | 13:48:52.900 STDOUT terraform:  + size = 80
2025-05-19 13:48:52.900485 | orchestrator | 13:48:52.900 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.900500 | orchestrator | 13:48:52.900 STDOUT terraform:  }
2025-05-19 13:48:52.900602 | orchestrator | 13:48:52.900 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-19 13:48:52.900697 | orchestrator | 13:48:52.900 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 13:48:52.900759 | orchestrator | 13:48:52.900 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.900802 | orchestrator | 13:48:52.900 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.900866 | orchestrator | 13:48:52.900 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.900928 | orchestrator | 13:48:52.900 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 13:48:52.901009 | orchestrator | 13:48:52.900 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.901085 | orchestrator | 13:48:52.900 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-05-19 13:48:52.901148 | orchestrator | 13:48:52.901 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.901188 | orchestrator | 13:48:52.901 STDOUT terraform:  + size = 80
2025-05-19 13:48:52.901229 | orchestrator | 13:48:52.901 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.901245 | orchestrator | 13:48:52.901 STDOUT terraform:  }
2025-05-19 13:48:52.901345 | orchestrator | 13:48:52.901 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-19 13:48:52.901437 | orchestrator | 13:48:52.901 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 13:48:52.901499 | orchestrator | 13:48:52.901 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.901540 | orchestrator | 13:48:52.901 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.901603 | orchestrator | 13:48:52.901 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.901665 | orchestrator | 13:48:52.901 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 13:48:52.901735 | orchestrator | 13:48:52.901 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.901807 | orchestrator | 13:48:52.901 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-05-19 13:48:52.901875 | orchestrator | 13:48:52.901 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.901895 | orchestrator | 13:48:52.901 STDOUT terraform:  + size = 80
2025-05-19 13:48:52.902046 | orchestrator | 13:48:52.901 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.902067 | orchestrator | 13:48:52.901 STDOUT terraform:  }
2025-05-19 13:48:52.902173 | orchestrator | 13:48:52.901 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-19 13:48:52.902271 | orchestrator | 13:48:52.902 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 13:48:52.902334 | orchestrator | 13:48:52.902 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.902376 | orchestrator | 13:48:52.902 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.902440 | orchestrator | 13:48:52.902 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.902501 | orchestrator | 13:48:52.902 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 13:48:52.902562 | orchestrator | 13:48:52.902 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.902642 | orchestrator | 13:48:52.902 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-05-19 13:48:52.902704 | orchestrator | 13:48:52.902 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.902746 | orchestrator | 13:48:52.902 STDOUT terraform:  + size = 80
2025-05-19 13:48:52.902788 | orchestrator | 13:48:52.902 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.902804 | orchestrator | 13:48:52.902 STDOUT terraform:  }
2025-05-19 13:48:52.902904 | orchestrator | 13:48:52.902 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-19 13:48:52.903065 | orchestrator | 13:48:52.902 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 13:48:52.903137 | orchestrator | 13:48:52.903 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.903199 | orchestrator | 13:48:52.903 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.903243 | orchestrator | 13:48:52.903 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.903306 | orchestrator | 13:48:52.903 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 13:48:52.903363 | orchestrator | 13:48:52.903 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.903434 | orchestrator | 13:48:52.903 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-05-19 13:48:52.903489 | orchestrator | 13:48:52.903 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.903524 | orchestrator | 13:48:52.903 STDOUT terraform:  + size = 80
2025-05-19 13:48:52.903561 | orchestrator | 13:48:52.903 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.903577 | orchestrator | 13:48:52.903 STDOUT terraform:  }
2025-05-19 13:48:52.903656 | orchestrator | 13:48:52.903 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-19 13:48:52.903732 | orchestrator | 13:48:52.903 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 13:48:52.903786 | orchestrator | 13:48:52.903 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.903823 | orchestrator | 13:48:52.903 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.903881 | orchestrator | 13:48:52.903 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.903933 | orchestrator | 13:48:52.903 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.904014 | orchestrator | 13:48:52.903 STDOUT terraform:  + name = "testbed-volume-0-node-3"
2025-05-19 13:48:52.904065 | orchestrator | 13:48:52.904 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.904095 | orchestrator | 13:48:52.904 STDOUT terraform:  + size = 20
2025-05-19 13:48:52.904131 | orchestrator | 13:48:52.904 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.904147 | orchestrator | 13:48:52.904 STDOUT terraform:  }
2025-05-19 13:48:52.904226 | orchestrator | 13:48:52.904 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-05-19 13:48:52.904302 | orchestrator | 13:48:52.904 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 13:48:52.904357 | orchestrator | 13:48:52.904 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.904396 | orchestrator | 13:48:52.904 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.904451 | orchestrator | 13:48:52.904 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.904505 | orchestrator | 13:48:52.904 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.904572 | orchestrator | 13:48:52.904 STDOUT terraform:  + name = "testbed-volume-1-node-4"
2025-05-19 13:48:52.904625 | orchestrator | 13:48:52.904 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.904662 | orchestrator | 13:48:52.904 STDOUT terraform:  + size = 20
2025-05-19 13:48:52.904705 | orchestrator | 13:48:52.904 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.904722 | orchestrator | 13:48:52.904 STDOUT terraform:  }
2025-05-19 13:48:52.904792 | orchestrator | 13:48:52.904 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created
2025-05-19 13:48:52.904868 | orchestrator | 13:48:52.904 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 13:48:52.904921 | orchestrator | 13:48:52.904 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.905024 | orchestrator | 13:48:52.904 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.905041 | orchestrator | 13:48:52.904 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.905056 | orchestrator | 13:48:52.905 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.905131 | orchestrator | 13:48:52.905 STDOUT terraform:  + name = "testbed-volume-2-node-5"
2025-05-19 13:48:52.905188 | orchestrator | 13:48:52.905 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.905220 | orchestrator | 13:48:52.905 STDOUT terraform:  + size = 20
2025-05-19 13:48:52.905257 | orchestrator | 13:48:52.905 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.905273 | orchestrator | 13:48:52.905 STDOUT terraform:  }
2025-05-19 13:48:52.905353 | orchestrator | 13:48:52.905 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created
2025-05-19 13:48:52.905430 | orchestrator | 13:48:52.905 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 13:48:52.905481 | orchestrator | 13:48:52.905 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.905516 | orchestrator | 13:48:52.905 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.905571 | orchestrator | 13:48:52.905 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.905624 | orchestrator | 13:48:52.905 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.905694 | orchestrator | 13:48:52.905 STDOUT terraform:  + name = "testbed-volume-3-node-3"
2025-05-19 13:48:52.905750 | orchestrator | 13:48:52.905 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.905784 | orchestrator | 13:48:52.905 STDOUT terraform:  + size = 20
2025-05-19 13:48:52.905821 | orchestrator | 13:48:52.905 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.905874 | orchestrator | 13:48:52.905 STDOUT terraform:  }
2025-05-19 13:48:52.905946 | orchestrator | 13:48:52.905 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created
2025-05-19 13:48:52.906055 | orchestrator | 13:48:52.905 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 13:48:52.906109 | orchestrator | 13:48:52.906 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.906146 | orchestrator | 13:48:52.906 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.906200 | orchestrator | 13:48:52.906 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.906254 | orchestrator | 13:48:52.906 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.906320 | orchestrator | 13:48:52.906 STDOUT terraform:  + name = "testbed-volume-4-node-4"
2025-05-19 13:48:52.906374 | orchestrator | 13:48:52.906 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.906429 | orchestrator | 13:48:52.906 STDOUT terraform:  + size = 20
2025-05-19 13:48:52.906462 | orchestrator | 13:48:52.906 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.906476 | orchestrator | 13:48:52.906 STDOUT terraform:  }
2025-05-19 13:48:52.906563 | orchestrator | 13:48:52.906 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created
2025-05-19 13:48:52.906635 | orchestrator | 13:48:52.906 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 13:48:52.906696 | orchestrator | 13:48:52.906 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.906712 | orchestrator | 13:48:52.906 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.906775 | orchestrator | 13:48:52.906 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.906825 | orchestrator | 13:48:52.906 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.906897 | orchestrator | 13:48:52.906 STDOUT terraform:  + name = "testbed-volume-5-node-5"
2025-05-19 13:48:52.906971 | orchestrator | 13:48:52.906 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.906989 | orchestrator | 13:48:52.906 STDOUT terraform:  + size = 20
2025-05-19 13:48:52.907024 | orchestrator | 13:48:52.906 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.907040 | orchestrator | 13:48:52.907 STDOUT terraform:  }
2025-05-19 13:48:52.907110 | orchestrator | 13:48:52.907 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created
2025-05-19 13:48:52.907186 | orchestrator | 13:48:52.907 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 13:48:52.907238 | orchestrator | 13:48:52.907 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.907273 | orchestrator | 13:48:52.907 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.907327 | orchestrator | 13:48:52.907 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.907379 | orchestrator | 13:48:52.907 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.907445 | orchestrator | 13:48:52.907 STDOUT terraform:  + name = "testbed-volume-6-node-3"
2025-05-19 13:48:52.907499 | orchestrator | 13:48:52.907 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.907533 | orchestrator | 13:48:52.907 STDOUT terraform:  + size = 20
2025-05-19 13:48:52.907570 | orchestrator | 13:48:52.907 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.907585 | orchestrator | 13:48:52.907 STDOUT terraform:  }
2025-05-19 13:48:52.907665 | orchestrator | 13:48:52.907 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created
2025-05-19 13:48:52.907740 | orchestrator | 13:48:52.907 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 13:48:52.907793 | orchestrator | 13:48:52.907 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.907828 | orchestrator | 13:48:52.907 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.907882 | orchestrator | 13:48:52.907 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.907934 | orchestrator | 13:48:52.907 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.908035 | orchestrator | 13:48:52.907 STDOUT terraform:  + name = "testbed-volume-7-node-4"
2025-05-19 13:48:52.908086 | orchestrator | 13:48:52.908 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.908122 | orchestrator | 13:48:52.908 STDOUT terraform:  + size = 20
2025-05-19 13:48:52.908157 | orchestrator | 13:48:52.908 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.908171 | orchestrator | 13:48:52.908 STDOUT terraform:  }
2025-05-19 13:48:52.908252 | orchestrator | 13:48:52.908 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created
2025-05-19 13:48:52.908332 | orchestrator | 13:48:52.908 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 13:48:52.908378 | orchestrator | 13:48:52.908 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 13:48:52.908415 | orchestrator | 13:48:52.908 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.908469 | orchestrator | 13:48:52.908 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.908522 | orchestrator | 13:48:52.908 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 13:48:52.908583 | orchestrator | 13:48:52.908 STDOUT terraform:  + name = "testbed-volume-8-node-5"
2025-05-19 13:48:52.908635 | orchestrator | 13:48:52.908 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.908668 | orchestrator | 13:48:52.908 STDOUT terraform:  + size = 20
2025-05-19 13:48:52.908702 | orchestrator | 13:48:52.908 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 13:48:52.908725 | orchestrator | 13:48:52.908 STDOUT terraform:  }
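All fifteen volumes planned above differ only in name and size: one 80 GB base volume per server plus nine 20 GB data volumes spread round-robin over testbed-node-3..5. Expressed with count, the nine data volumes collapse into one block along these lines (a sketch, not the testbed's exact code):

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = 9
  availability_zone = "nova"
  size              = 20
  volume_type       = "ssd"
  # Reproduces the names above: index 0 -> node-3, 1 -> node-4, 2 -> node-5,
  # 3 -> node-3 again, and so on.
  name = format("testbed-volume-%d-node-%d", count.index, 3 + count.index % 3)
}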
2025-05-19 13:48:52.908788 | orchestrator | 13:48:52.908 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created
2025-05-19 13:48:52.908859 | orchestrator | 13:48:52.908 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" {
2025-05-19 13:48:52.908916 | orchestrator | 13:48:52.908 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-19 13:48:52.909023 | orchestrator | 13:48:52.908 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-19 13:48:52.909041 | orchestrator | 13:48:52.908 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-19 13:48:52.909095 | orchestrator | 13:48:52.909 STDOUT terraform:  + all_tags = (known after apply)
2025-05-19 13:48:52.909135 | orchestrator | 13:48:52.909 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.909168 | orchestrator | 13:48:52.909 STDOUT terraform:  + config_drive = true
2025-05-19 13:48:52.909225 | orchestrator | 13:48:52.909 STDOUT terraform:  + created = (known after apply)
2025-05-19 13:48:52.909278 | orchestrator | 13:48:52.909 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-19 13:48:52.909328 | orchestrator | 13:48:52.909 STDOUT terraform:  + flavor_name = "OSISM-4V-16"
2025-05-19 13:48:52.909366 | orchestrator | 13:48:52.909 STDOUT terraform:  + force_delete = false
2025-05-19 13:48:52.909423 | orchestrator | 13:48:52.909 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.909481 | orchestrator | 13:48:52.909 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 13:48:52.909552 | orchestrator | 13:48:52.909 STDOUT terraform:  + image_name = (known after apply)
2025-05-19 13:48:52.910115 | orchestrator | 13:48:52.909 STDOUT terraform:  + key_pair = "testbed"
2025-05-19 13:48:52.910199 | orchestrator | 13:48:52.909 STDOUT terraform:  + name = "testbed-manager"
2025-05-19 13:48:52.910206 | orchestrator | 13:48:52.909 STDOUT terraform:  + power_state = "active"
2025-05-19 13:48:52.910211 | orchestrator | 13:48:52.909 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.910215 | orchestrator | 13:48:52.909 STDOUT terraform:  + security_groups = (known after apply)
2025-05-19 13:48:52.910220 | orchestrator | 13:48:52.909 STDOUT terraform:  + stop_before_destroy = false
2025-05-19 13:48:52.910224 | orchestrator | 13:48:52.909 STDOUT terraform:  + updated = (known after apply)
2025-05-19 13:48:52.910228 | orchestrator | 13:48:52.909 STDOUT terraform:  + user_data = (known after apply)
2025-05-19 13:48:52.910232 | orchestrator | 13:48:52.909 STDOUT terraform:  + block_device {
2025-05-19 13:48:52.910237 | orchestrator | 13:48:52.909 STDOUT terraform:  + boot_index = 0
2025-05-19 13:48:52.910241 | orchestrator | 13:48:52.909 STDOUT terraform:  + delete_on_termination = false
2025-05-19 13:48:52.910245 | orchestrator | 13:48:52.909 STDOUT terraform:  + destination_type = "volume"
2025-05-19 13:48:52.910255 | orchestrator | 13:48:52.910 STDOUT terraform:  + multiattach = false
2025-05-19 13:48:52.910259 | orchestrator | 13:48:52.910 STDOUT terraform:  + source_type = "volume"
2025-05-19 13:48:52.910274 | orchestrator | 13:48:52.910 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 13:48:52.910279 | orchestrator | 13:48:52.910 STDOUT terraform:  }
2025-05-19 13:48:52.910289 | orchestrator | 13:48:52.910 STDOUT terraform:  + network {
2025-05-19 13:48:52.910296 | orchestrator | 13:48:52.910 STDOUT terraform:  + access_network = false
2025-05-19 13:48:52.910329 | orchestrator | 13:48:52.910 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-19 13:48:52.910524 | orchestrator | 13:48:52.910 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-19 13:48:52.910592 | orchestrator | 13:48:52.910 STDOUT terraform:  + mac = (known after apply)
2025-05-19 13:48:52.910625 | orchestrator | 13:48:52.910 STDOUT terraform:  + name = (known after apply)
2025-05-19 13:48:52.910636 | orchestrator | 13:48:52.910 STDOUT terraform:  + port = (known after apply)
2025-05-19 13:48:52.910646 | orchestrator | 13:48:52.910 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 13:48:52.910657 | orchestrator | 13:48:52.910 STDOUT terraform:  }
2025-05-19 13:48:52.910668 | orchestrator | 13:48:52.910 STDOUT terraform:  }
2025-05-19 13:48:52.910682 | orchestrator | 13:48:52.910 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created
2025-05-19 13:48:52.910761 | orchestrator | 13:48:52.910 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-19 13:48:52.910803 | orchestrator | 13:48:52.910 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-19 13:48:52.910859 | orchestrator | 13:48:52.910 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-19 13:48:52.910997 | orchestrator | 13:48:52.910 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-19 13:48:52.911017 | orchestrator | 13:48:52.910 STDOUT terraform:  + all_tags = (known after apply)
2025-05-19 13:48:52.911043 | orchestrator | 13:48:52.910 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.911080 | orchestrator | 13:48:52.911 STDOUT terraform:  + config_drive = true
2025-05-19 13:48:52.911147 | orchestrator | 13:48:52.911 STDOUT terraform:  + created = (known after apply)
2025-05-19 13:48:52.911196 | orchestrator | 13:48:52.911 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-19 13:48:52.911243 | orchestrator | 13:48:52.911 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-19 13:48:52.911287 | orchestrator | 13:48:52.911 STDOUT terraform:  + force_delete = false
2025-05-19 13:48:52.911345 | orchestrator | 13:48:52.911 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.911404 | orchestrator | 13:48:52.911 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 13:48:52.911461 | orchestrator | 13:48:52.911 STDOUT terraform:  + image_name = (known after apply)
2025-05-19 13:48:52.911501 | orchestrator | 13:48:52.911 STDOUT terraform:  + key_pair = "testbed"
2025-05-19 13:48:52.911551 | orchestrator | 13:48:52.911 STDOUT terraform:  + name = "testbed-node-0"
2025-05-19 13:48:52.911590 | orchestrator | 13:48:52.911 STDOUT terraform:  + power_state = "active"
2025-05-19 13:48:52.911649 | orchestrator | 13:48:52.911 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.911705 | orchestrator | 13:48:52.911 STDOUT terraform:  + security_groups = (known after apply)
2025-05-19 13:48:52.911744 | orchestrator | 13:48:52.911 STDOUT terraform:  + stop_before_destroy = false
2025-05-19 13:48:52.911800 | orchestrator | 13:48:52.911 STDOUT terraform:  + updated = (known after apply)
2025-05-19 13:48:52.911889 | orchestrator | 13:48:52.911 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-19 13:48:52.911898 | orchestrator | 13:48:52.911 STDOUT terraform:  + block_device {
2025-05-19 13:48:52.911941 | orchestrator | 13:48:52.911 STDOUT terraform:  + boot_index = 0
2025-05-19 13:48:52.912022 | orchestrator | 13:48:52.911 STDOUT terraform:  + delete_on_termination = false
2025-05-19 13:48:52.912070 | orchestrator | 13:48:52.912 STDOUT terraform:  + destination_type = "volume"
2025-05-19 13:48:52.912116 | orchestrator | 13:48:52.912 STDOUT terraform:  + multiattach = false
2025-05-19 13:48:52.912164 | orchestrator | 13:48:52.912 STDOUT terraform:  + source_type = "volume"
2025-05-19 13:48:52.912226 | orchestrator | 13:48:52.912 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 13:48:52.912245 | orchestrator | 13:48:52.912 STDOUT terraform:  }
2025-05-19 13:48:52.912264 | orchestrator | 13:48:52.912 STDOUT terraform:  + network {
2025-05-19 13:48:52.912294 | orchestrator | 13:48:52.912 STDOUT terraform:  + access_network = false
2025-05-19 13:48:52.912343 | orchestrator | 13:48:52.912 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-19 13:48:52.912394 | orchestrator | 13:48:52.912 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-19 13:48:52.912445 | orchestrator | 13:48:52.912 STDOUT terraform:  + mac = (known after apply)
2025-05-19 13:48:52.912500 | orchestrator | 13:48:52.912 STDOUT terraform:  + name = (known after apply)
2025-05-19 13:48:52.912540 | orchestrator | 13:48:52.912 STDOUT terraform:  + port = (known after apply)
2025-05-19 13:48:52.912584 | orchestrator | 13:48:52.912 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 13:48:52.912602 | orchestrator | 13:48:52.912 STDOUT terraform:  }
2025-05-19 13:48:52.912619 | orchestrator | 13:48:52.912 STDOUT terraform:  }
2025-05-19 13:48:52.912680 | orchestrator | 13:48:52.912 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created
2025-05-19 13:48:52.912739 | orchestrator | 13:48:52.912 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-19 13:48:52.912789 | orchestrator | 13:48:52.912 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-19 13:48:52.912838 | orchestrator | 13:48:52.912 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-19 13:48:52.912888 | orchestrator | 13:48:52.912 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-19 13:48:52.912939 | orchestrator | 13:48:52.912 STDOUT terraform:  + all_tags = (known after apply)
2025-05-19 13:48:52.912989 | orchestrator | 13:48:52.912 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.913019 | orchestrator | 13:48:52.912 STDOUT terraform:  + config_drive = true
2025-05-19 13:48:52.913069 | orchestrator | 13:48:52.913 STDOUT terraform:  + created = (known after apply)
2025-05-19 13:48:52.913118 | orchestrator | 13:48:52.913 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-19 13:48:52.913159 | orchestrator | 13:48:52.913 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-19 13:48:52.913193 | orchestrator | 13:48:52.913 STDOUT terraform:  + force_delete = false
2025-05-19 13:48:52.913246 | orchestrator | 13:48:52.913 STDOUT terraform:  + id = (known after apply)
2025-05-19 13:48:52.913295 | orchestrator | 13:48:52.913 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 13:48:52.913344 | orchestrator | 13:48:52.913 STDOUT terraform:  + image_name = (known after apply)
2025-05-19 13:48:52.913379 | orchestrator | 13:48:52.913 STDOUT terraform:  + key_pair = "testbed"
2025-05-19 13:48:52.913423 | orchestrator | 13:48:52.913 STDOUT terraform:  + name = "testbed-node-1"
2025-05-19 13:48:52.913458 | orchestrator | 13:48:52.913 STDOUT terraform:  + power_state = "active"
2025-05-19 13:48:52.913507 | orchestrator | 13:48:52.913 STDOUT terraform:  + region = (known after apply)
2025-05-19 13:48:52.913554 | orchestrator | 13:48:52.913 STDOUT terraform:  + security_groups = (known after apply)
2025-05-19 13:48:52.913587 | orchestrator | 13:48:52.913 STDOUT terraform:  + stop_before_destroy = false
2025-05-19 13:48:52.913638 | orchestrator | 13:48:52.913 STDOUT terraform:  + updated = (known after apply)
2025-05-19 13:48:52.913708 | orchestrator | 13:48:52.913 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-19 13:48:52.913726 | orchestrator | 13:48:52.913 STDOUT terraform:  + block_device {
2025-05-19 13:48:52.913762 | orchestrator | 13:48:52.913 STDOUT terraform:  + boot_index = 0
2025-05-19 13:48:52.913802 | orchestrator | 13:48:52.913 STDOUT terraform:  + delete_on_termination = false
2025-05-19 13:48:52.913844 | orchestrator | 13:48:52.913 STDOUT terraform:  + destination_type = "volume"
2025-05-19 13:48:52.913886 | orchestrator | 13:48:52.913 STDOUT terraform:  + multiattach = false
2025-05-19 13:48:52.913927 | orchestrator | 13:48:52.913 STDOUT terraform:  + source_type = "volume"
2025-05-19 13:48:52.913998 | orchestrator | 13:48:52.913 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 13:48:52.914034 | orchestrator | 13:48:52.913 STDOUT terraform:  }
2025-05-19 13:48:52.914042 | orchestrator | 13:48:52.914 STDOUT terraform:  + network {
2025-05-19 13:48:52.914073 | orchestrator | 13:48:52.914 STDOUT terraform:  + access_network = false
2025-05-19 13:48:52.914117 | orchestrator | 13:48:52.914 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-19 13:48:52.914159 | orchestrator | 13:48:52.914 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-19 13:48:52.914205 | orchestrator | 13:48:52.914 STDOUT terraform:  + mac = (known after apply)
2025-05-19 13:48:52.914251 | orchestrator | 13:48:52.914 STDOUT terraform:  + name = (known after apply)
2025-05-19 13:48:52.914295 | orchestrator | 13:48:52.914 STDOUT terraform:  + port = (known after apply)
2025-05-19 13:48:52.914339 | orchestrator | 13:48:52.914 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 13:48:52.914361 | orchestrator | 13:48:52.914 STDOUT terraform:  }
2025-05-19 13:48:52.914382 | orchestrator | 13:48:52.914 STDOUT terraform:  }
2025-05-19 13:48:52.914442 | orchestrator | 13:48:52.914 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created
2025-05-19 13:48:52.914502 | orchestrator | 13:48:52.914 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-19 13:48:52.914551 | orchestrator | 13:48:52.914 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-19 13:48:52.914599 | orchestrator | 13:48:52.914 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-19 13:48:52.914648 | orchestrator | 13:48:52.914 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-19 13:48:52.914698 | orchestrator | 13:48:52.914 STDOUT terraform:  + all_tags = (known after apply)
2025-05-19 13:48:52.914736 | orchestrator | 13:48:52.914 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 13:48:52.914762 | orchestrator | 13:48:52.914 STDOUT terraform:  + config_drive = true
2025-05-19 13:48:52.914812 | orchestrator | 13:48:52.914 STDOUT terraform:  + created = (known after apply)
2025-05-19 13:48:52.914861 | orchestrator | 13:48:52.914 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-19 13:48:52.914905 | orchestrator | 13:48:52.914 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
13:48:52.914937 | orchestrator | 13:48:52.914 STDOUT terraform:  + force_delete = false 2025-05-19 13:48:52.915047 | orchestrator | 13:48:52.914 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.915092 | orchestrator | 13:48:52.915 STDOUT terraform:  + image_id = (known after apply) 2025-05-19 13:48:52.915141 | orchestrator | 13:48:52.915 STDOUT terraform:  + image_name = (known after apply) 2025-05-19 13:48:52.915177 | orchestrator | 13:48:52.915 STDOUT terraform:  + key_pair = "testbed" 2025-05-19 13:48:52.915221 | orchestrator | 13:48:52.915 STDOUT terraform:  + name = "testbed-node-2" 2025-05-19 13:48:52.915256 | orchestrator | 13:48:52.915 STDOUT terraform:  + power_state = "active" 2025-05-19 13:48:52.915305 | orchestrator | 13:48:52.915 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.915355 | orchestrator | 13:48:52.915 STDOUT terraform:  + security_groups = (known after apply) 2025-05-19 13:48:52.915391 | orchestrator | 13:48:52.915 STDOUT terraform:  + stop_before_destroy = false 2025-05-19 13:48:52.915441 | orchestrator | 13:48:52.915 STDOUT terraform:  + updated = (known after apply) 2025-05-19 13:48:52.915513 | orchestrator | 13:48:52.915 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-19 13:48:52.915536 | orchestrator | 13:48:52.915 STDOUT terraform:  + block_device { 2025-05-19 13:48:52.915570 | orchestrator | 13:48:52.915 STDOUT terraform:  + boot_index = 0 2025-05-19 13:48:52.915608 | orchestrator | 13:48:52.915 STDOUT terraform:  + delete_on_termination = false 2025-05-19 13:48:52.915650 | orchestrator | 13:48:52.915 STDOUT terraform:  + destination_type = "volume" 2025-05-19 13:48:52.915690 | orchestrator | 13:48:52.915 STDOUT terraform:  + multiattach = false 2025-05-19 13:48:52.915731 | orchestrator | 13:48:52.915 STDOUT terraform:  + source_type = "volume" 2025-05-19 13:48:52.915786 | orchestrator | 13:48:52.915 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 13:48:52.915807 | orchestrator | 13:48:52.915 STDOUT terraform:  } 2025-05-19 13:48:52.915828 | orchestrator | 13:48:52.915 STDOUT terraform:  + network { 2025-05-19 13:48:52.915857 | orchestrator | 13:48:52.915 STDOUT terraform:  + access_network = false 2025-05-19 13:48:52.915900 | orchestrator | 13:48:52.915 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-19 13:48:52.915948 | orchestrator | 13:48:52.915 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-19 13:48:52.916020 | orchestrator | 13:48:52.915 STDOUT terraform:  + mac = (known after apply) 2025-05-19 13:48:52.916067 | orchestrator | 13:48:52.916 STDOUT terraform:  + name = (known after apply) 2025-05-19 13:48:52.916113 | orchestrator | 13:48:52.916 STDOUT terraform:  + port = (known after apply) 2025-05-19 13:48:52.916158 | orchestrator | 13:48:52.916 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 13:48:52.916178 | orchestrator | 13:48:52.916 STDOUT terraform:  } 2025-05-19 13:48:52.916197 | orchestrator | 13:48:52.916 STDOUT terraform:  } 2025-05-19 13:48:52.916335 | orchestrator | 13:48:52.916 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-05-19 13:48:52.916414 | orchestrator | 13:48:52.916 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-19 13:48:52.916466 | orchestrator | 13:48:52.916 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-19 13:48:52.916516 | orchestrator | 13:48:52.916 STDOUT terraform:  + access_ip_v6 = (known after apply) 
2025-05-19 13:48:52.916561 | orchestrator | 13:48:52.916 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-19 13:48:52.916607 | orchestrator | 13:48:52.916 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 13:48:52.916637 | orchestrator | 13:48:52.916 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 13:48:52.916663 | orchestrator | 13:48:52.916 STDOUT terraform:  + config_drive = true 2025-05-19 13:48:52.916707 | orchestrator | 13:48:52.916 STDOUT terraform:  + created = (known after apply) 2025-05-19 13:48:52.916754 | orchestrator | 13:48:52.916 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-19 13:48:52.916791 | orchestrator | 13:48:52.916 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-19 13:48:52.916823 | orchestrator | 13:48:52.916 STDOUT terraform:  + force_delete = false 2025-05-19 13:48:52.916867 | orchestrator | 13:48:52.916 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.916910 | orchestrator | 13:48:52.916 STDOUT terraform:  + image_id = (known after apply) 2025-05-19 13:48:52.916963 | orchestrator | 13:48:52.916 STDOUT terraform:  + image_name = (known after apply) 2025-05-19 13:48:52.917002 | orchestrator | 13:48:52.916 STDOUT terraform:  + key_pair = "testbed" 2025-05-19 13:48:52.917041 | orchestrator | 13:48:52.916 STDOUT terraform:  + name = "testbed-node-3" 2025-05-19 13:48:52.917073 | orchestrator | 13:48:52.917 STDOUT terraform:  + power_state = "active" 2025-05-19 13:48:52.917118 | orchestrator | 13:48:52.917 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.917195 | orchestrator | 13:48:52.917 STDOUT terraform:  + security_groups = (known after apply) 2025-05-19 13:48:52.917225 | orchestrator | 13:48:52.917 STDOUT terraform:  + stop_before_destroy = false 2025-05-19 13:48:52.917271 | orchestrator | 13:48:52.917 STDOUT terraform:  + updated = (known after apply) 2025-05-19 13:48:52.917334 | orchestrator | 13:48:52.917 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-19 13:48:52.917354 | orchestrator | 13:48:52.917 STDOUT terraform:  + block_device { 2025-05-19 13:48:52.917382 | orchestrator | 13:48:52.917 STDOUT terraform:  + boot_index = 0 2025-05-19 13:48:52.917417 | orchestrator | 13:48:52.917 STDOUT terraform:  + delete_on_termination = false 2025-05-19 13:48:52.917455 | orchestrator | 13:48:52.917 STDOUT terraform:  + destination_type = "volume" 2025-05-19 13:48:52.917492 | orchestrator | 13:48:52.917 STDOUT terraform:  + multiattach = false 2025-05-19 13:48:52.917531 | orchestrator | 13:48:52.917 STDOUT terraform:  + source_type = "volume" 2025-05-19 13:48:52.917579 | orchestrator | 13:48:52.917 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 13:48:52.917585 | orchestrator | 13:48:52.917 STDOUT terraform:  } 2025-05-19 13:48:52.917609 | orchestrator | 13:48:52.917 STDOUT terraform:  + network { 2025-05-19 13:48:52.917635 | orchestrator | 13:48:52.917 STDOUT terraform:  + access_network = false 2025-05-19 13:48:52.917675 | orchestrator | 13:48:52.917 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-19 13:48:52.917715 | orchestrator | 13:48:52.917 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-19 13:48:52.917756 | orchestrator | 13:48:52.917 STDOUT terraform:  + mac = (known after apply) 2025-05-19 13:48:52.917796 | orchestrator | 13:48:52.917 STDOUT terraform:  + name = (known after apply) 2025-05-19 13:48:52.917836 | orchestrator | 13:48:52.917 STDOUT terraform:  + port = (known after apply) 
2025-05-19 13:48:52.917875 | orchestrator | 13:48:52.917 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 13:48:52.917882 | orchestrator | 13:48:52.917 STDOUT terraform:  } 2025-05-19 13:48:52.917907 | orchestrator | 13:48:52.917 STDOUT terraform:  } 2025-05-19 13:48:52.918009 | orchestrator | 13:48:52.917 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-19 13:48:52.918083 | orchestrator | 13:48:52.917 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-19 13:48:52.918128 | orchestrator | 13:48:52.918 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-19 13:48:52.918171 | orchestrator | 13:48:52.918 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-19 13:48:52.918219 | orchestrator | 13:48:52.918 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-19 13:48:52.918259 | orchestrator | 13:48:52.918 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 13:48:52.918288 | orchestrator | 13:48:52.918 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 13:48:52.918312 | orchestrator | 13:48:52.918 STDOUT terraform:  + config_drive = true 2025-05-19 13:48:52.918355 | orchestrator | 13:48:52.918 STDOUT terraform:  + created = (known after apply) 2025-05-19 13:48:52.918399 | orchestrator | 13:48:52.918 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-19 13:48:52.918434 | orchestrator | 13:48:52.918 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-19 13:48:52.918463 | orchestrator | 13:48:52.918 STDOUT terraform:  + force_delete = false 2025-05-19 13:48:52.918505 | orchestrator | 13:48:52.918 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.918547 | orchestrator | 13:48:52.918 STDOUT terraform:  + image_id = (known after apply) 2025-05-19 13:48:52.918591 | orchestrator | 13:48:52.918 STDOUT terraform:  + image_name = (known after apply) 2025-05-19 13:48:52.918623 | orchestrator | 13:48:52.918 STDOUT terraform:  + key_pair = "testbed" 2025-05-19 13:48:52.918659 | orchestrator | 13:48:52.918 STDOUT terraform:  + name = "testbed-node-4" 2025-05-19 13:48:52.918688 | orchestrator | 13:48:52.918 STDOUT terraform:  + power_state = "active" 2025-05-19 13:48:52.918731 | orchestrator | 13:48:52.918 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.918772 | orchestrator | 13:48:52.918 STDOUT terraform:  + security_groups = (known after apply) 2025-05-19 13:48:52.918801 | orchestrator | 13:48:52.918 STDOUT terraform:  + stop_before_destroy = false 2025-05-19 13:48:52.918850 | orchestrator | 13:48:52.918 STDOUT terraform:  + updated = (known after apply) 2025-05-19 13:48:52.918909 | orchestrator | 13:48:52.918 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-19 13:48:52.918926 | orchestrator | 13:48:52.918 STDOUT terraform:  + block_device { 2025-05-19 13:48:52.918970 | orchestrator | 13:48:52.918 STDOUT terraform:  + boot_index = 0 2025-05-19 13:48:52.919004 | orchestrator | 13:48:52.918 STDOUT terraform:  + delete_on_termination = false 2025-05-19 13:48:52.919039 | orchestrator | 13:48:52.918 STDOUT terraform:  + destination_type = "volume" 2025-05-19 13:48:52.919073 | orchestrator | 13:48:52.919 STDOUT terraform:  + multiattach = false 2025-05-19 13:48:52.919109 | orchestrator | 13:48:52.919 STDOUT terraform:  + source_type = "volume" 2025-05-19 13:48:52.919154 | orchestrator | 13:48:52.919 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 13:48:52.919161 | orchestrator | 
13:48:52.919 STDOUT terraform:  } 2025-05-19 13:48:52.919184 | orchestrator | 13:48:52.919 STDOUT terraform:  + network { 2025-05-19 13:48:52.919209 | orchestrator | 13:48:52.919 STDOUT terraform:  + access_network = false 2025-05-19 13:48:52.919246 | orchestrator | 13:48:52.919 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-19 13:48:52.919284 | orchestrator | 13:48:52.919 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-19 13:48:52.919321 | orchestrator | 13:48:52.919 STDOUT terraform:  + mac = (known after apply) 2025-05-19 13:48:52.919357 | orchestrator | 13:48:52.919 STDOUT terraform:  + name = (known after apply) 2025-05-19 13:48:52.919396 | orchestrator | 13:48:52.919 STDOUT terraform:  + port = (known after apply) 2025-05-19 13:48:52.919434 | orchestrator | 13:48:52.919 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 13:48:52.919456 | orchestrator | 13:48:52.919 STDOUT terraform:  } 2025-05-19 13:48:52.919462 | orchestrator | 13:48:52.919 STDOUT terraform:  } 2025-05-19 13:48:52.919516 | orchestrator | 13:48:52.919 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-19 13:48:52.919565 | orchestrator | 13:48:52.919 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-19 13:48:52.919609 | orchestrator | 13:48:52.919 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-19 13:48:52.919651 | orchestrator | 13:48:52.919 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-19 13:48:52.919693 | orchestrator | 13:48:52.919 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-19 13:48:52.919736 | orchestrator | 13:48:52.919 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 13:48:52.919764 | orchestrator | 13:48:52.919 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 13:48:52.919794 | orchestrator | 13:48:52.919 STDOUT terraform:  + config_drive = true 2025-05-19 13:48:52.919831 | orchestrator | 13:48:52.919 STDOUT terraform:  + created = (known after apply) 2025-05-19 13:48:52.919873 | orchestrator | 13:48:52.919 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-19 13:48:52.919908 | orchestrator | 13:48:52.919 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-19 13:48:52.919937 | orchestrator | 13:48:52.919 STDOUT terraform:  + force_delete = false 2025-05-19 13:48:52.920002 | orchestrator | 13:48:52.919 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.920067 | orchestrator | 13:48:52.919 STDOUT terraform:  + image_id = (known after apply) 2025-05-19 13:48:52.920112 | orchestrator | 13:48:52.920 STDOUT terraform:  + image_name = (known after apply) 2025-05-19 13:48:52.920143 | orchestrator | 13:48:52.920 STDOUT terraform:  + key_pair = "testbed" 2025-05-19 13:48:52.920181 | orchestrator | 13:48:52.920 STDOUT terraform:  + name = "testbed-node-5" 2025-05-19 13:48:52.920212 | orchestrator | 13:48:52.920 STDOUT terraform:  + power_state = "active" 2025-05-19 13:48:52.920255 | orchestrator | 13:48:52.920 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.920297 | orchestrator | 13:48:52.920 STDOUT terraform:  + security_groups = (known after apply) 2025-05-19 13:48:52.920326 | orchestrator | 13:48:52.920 STDOUT terraform:  + stop_before_destroy = false 2025-05-19 13:48:52.920369 | orchestrator | 13:48:52.920 STDOUT terraform:  + updated = (known after apply) 2025-05-19 13:48:52.920429 | orchestrator | 13:48:52.920 STDOUT terraform:  + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-19 13:48:52.920439 | orchestrator | 13:48:52.920 STDOUT terraform:  + block_device { 2025-05-19 13:48:52.920475 | orchestrator | 13:48:52.920 STDOUT terraform:  + boot_index = 0 2025-05-19 13:48:52.920508 | orchestrator | 13:48:52.920 STDOUT terraform:  + delete_on_termination = false 2025-05-19 13:48:52.920543 | orchestrator | 13:48:52.920 STDOUT terraform:  + destination_type = "volume" 2025-05-19 13:48:52.920578 | orchestrator | 13:48:52.920 STDOUT terraform:  + multiattach = false 2025-05-19 13:48:52.920614 | orchestrator | 13:48:52.920 STDOUT terraform:  + source_type = "volume" 2025-05-19 13:48:52.920661 | orchestrator | 13:48:52.920 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 13:48:52.920682 | orchestrator | 13:48:52.920 STDOUT terraform:  } 2025-05-19 13:48:52.920688 | orchestrator | 13:48:52.920 STDOUT terraform:  + network { 2025-05-19 13:48:52.920720 | orchestrator | 13:48:52.920 STDOUT terraform:  + access_network = false 2025-05-19 13:48:52.920755 | orchestrator | 13:48:52.920 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-19 13:48:52.920792 | orchestrator | 13:48:52.920 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-19 13:48:52.920830 | orchestrator | 13:48:52.920 STDOUT terraform:  + mac = (known after apply) 2025-05-19 13:48:52.920867 | orchestrator | 13:48:52.920 STDOUT terraform:  + name = (known after apply) 2025-05-19 13:48:52.920905 | orchestrator | 13:48:52.920 STDOUT terraform:  + port = (known after apply) 2025-05-19 13:48:52.920942 | orchestrator | 13:48:52.920 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 13:48:52.920965 | orchestrator | 13:48:52.920 STDOUT terraform:  } 2025-05-19 13:48:52.921093 | orchestrator | 13:48:52.920 STDOUT terraform:  } 2025-05-19 13:48:52.921128 | orchestrator | 13:48:52.920 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-05-19 13:48:52.921146 | orchestrator | 13:48:52.921 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-05-19 13:48:52.921157 | orchestrator | 13:48:52.921 STDOUT terraform:  + fingerprint = (known after apply) 2025-05-19 13:48:52.921167 | orchestrator | 13:48:52.921 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.921180 | orchestrator | 13:48:52.921 STDOUT terraform:  + name = "testbed" 2025-05-19 13:48:52.921190 | orchestrator | 13:48:52.921 STDOUT terraform:  + private_key = (sensitive value) 2025-05-19 13:48:52.921242 | orchestrator | 13:48:52.921 STDOUT terraform:  + public_key = (known after apply) 2025-05-19 13:48:52.921256 | orchestrator | 13:48:52.921 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.921301 | orchestrator | 13:48:52.921 STDOUT terraform:  + user_id = (known after apply) 2025-05-19 13:48:52.921312 | orchestrator | 13:48:52.921 STDOUT terraform:  } 2025-05-19 13:48:52.921365 | orchestrator | 13:48:52.921 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-05-19 13:48:52.921423 | orchestrator | 13:48:52.921 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-19 13:48:52.921456 | orchestrator | 13:48:52.921 STDOUT terraform:  + device = (known after apply) 2025-05-19 13:48:52.921469 | orchestrator | 13:48:52.921 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.921506 | orchestrator | 13:48:52.921 STDOUT terraform:  + instance_id = (known after apply) 2025-05-19 13:48:52.921520 | 
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
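Nine identical attachments are planned for six nodes, so some nodes receive more than one extra volume; the exact instance/volume wiring is not visible in the plan. A sketch under that caveat:

    # Sketch only: the mapping of 9 volumes onto 6 instances is assumed,
    # as is the extra_volume resource.
    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
      volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id
    }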
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }
  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }
  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }
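The manager's floating IP is allocated from the "public" pool and bound to a port rather than to an instance. A sketch of these three resources (the manager port they reference is planned below):

    resource "openstack_networking_network_v2" "net_management" {
      name                    = "net-testbed-management"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "public"
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }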
  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }
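Each node port pins a fixed management address (192.168.16.10 through .15) and whitelists the shared VIP/gateway addresses via allowed_address_pairs, so port security does not drop traffic for 192.168.16.8, .9, .254 or the 192.168.112.0/20 range. A sketch; the subnet resource name is an assumption:

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id # hypothetical name
        ip_address = "192.168.16.${10 + count.index}"
      }

      allowed_address_pairs { ip_address = "192.168.112.0/20" }
      allowed_address_pairs { ip_address = "192.168.16.254/20" }
      allowed_address_pairs { ip_address = "192.168.16.8/20" }
      allowed_address_pairs { ip_address = "192.168.16.9/20" }
    }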
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }
  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description       = "ssh"
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + port_range_max    = 22
      + port_range_min    = 22
      + protocol          = "tcp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description       = "wireguard"
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + port_range_max    = 51820
      + port_range_min    = 51820
      + protocol          = "udp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "tcp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "192.168.16.0/20"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }
orchestrator | 13:48:52.934 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-19 13:48:52.937490 | orchestrator | 13:48:52.935 STDOUT terraform:  + direction = "ingress" 2025-05-19 13:48:52.937497 | orchestrator | 13:48:52.935 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 13:48:52.937504 | orchestrator | 13:48:52.935 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.937515 | orchestrator | 13:48:52.935 STDOUT terraform:  + protocol = "udp" 2025-05-19 13:48:52.937521 | orchestrator | 13:48:52.935 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.937528 | orchestrator | 13:48:52.935 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 13:48:52.937535 | orchestrator | 13:48:52.935 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-19 13:48:52.937541 | orchestrator | 13:48:52.935 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 13:48:52.937548 | orchestrator | 13:48:52.935 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 13:48:52.937560 | orchestrator | 13:48:52.935 STDOUT terraform:  } 2025-05-19 13:48:52.937567 | orchestrator | 13:48:52.935 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-19 13:48:52.937574 | orchestrator | 13:48:52.935 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-19 13:48:52.937580 | orchestrator | 13:48:52.935 STDOUT terraform:  + direction = "ingress" 2025-05-19 13:48:52.937587 | orchestrator | 13:48:52.935 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 13:48:52.937594 | orchestrator | 13:48:52.935 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.937601 | orchestrator | 13:48:52.935 STDOUT terraform:  + protocol = "icmp" 2025-05-19 13:48:52.937610 | orchestrator | 13:48:52.935 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.937617 | orchestrator | 13:48:52.935 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 13:48:52.937624 | orchestrator | 13:48:52.935 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 13:48:52.937631 | orchestrator | 13:48:52.935 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 13:48:52.937637 | orchestrator | 13:48:52.935 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 13:48:52.937644 | orchestrator | 13:48:52.935 STDOUT terraform:  } 2025-05-19 13:48:52.937651 | orchestrator | 13:48:52.935 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-19 13:48:52.937666 | orchestrator | 13:48:52.935 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-19 13:48:52.937673 | orchestrator | 13:48:52.935 STDOUT terraform:  + direction = "ingress" 2025-05-19 13:48:52.937680 | orchestrator | 13:48:52.935 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 13:48:52.937687 | orchestrator | 13:48:52.935 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.937693 | orchestrator | 13:48:52.935 STDOUT terraform:  + protocol = "tcp" 2025-05-19 13:48:52.937700 | orchestrator | 13:48:52.935 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.937707 | orchestrator | 13:48:52.936 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 13:48:52.937713 | orchestrator | 13:48:52.936 STDOUT terraform:  + 
remote_ip_prefix = "0.0.0.0/0" 2025-05-19 13:48:52.937720 | orchestrator | 13:48:52.936 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 13:48:52.937726 | orchestrator | 13:48:52.936 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 13:48:52.937733 | orchestrator | 13:48:52.936 STDOUT terraform:  } 2025-05-19 13:48:52.937740 | orchestrator | 13:48:52.936 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-19 13:48:52.937746 | orchestrator | 13:48:52.936 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-19 13:48:52.937753 | orchestrator | 13:48:52.936 STDOUT terraform:  + direction = "ingress" 2025-05-19 13:48:52.937760 | orchestrator | 13:48:52.936 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 13:48:52.937767 | orchestrator | 13:48:52.936 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.937773 | orchestrator | 13:48:52.936 STDOUT terraform:  + protocol = "udp" 2025-05-19 13:48:52.937780 | orchestrator | 13:48:52.936 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.937787 | orchestrator | 13:48:52.936 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 13:48:52.937793 | orchestrator | 13:48:52.936 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 13:48:52.937800 | orchestrator | 13:48:52.936 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 13:48:52.937807 | orchestrator | 13:48:52.936 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 13:48:52.937813 | orchestrator | 13:48:52.936 STDOUT terraform:  } 2025-05-19 13:48:52.937825 | orchestrator | 13:48:52.936 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-19 13:48:52.937832 | orchestrator | 13:48:52.936 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-19 13:48:52.937839 | orchestrator | 13:48:52.936 STDOUT terraform:  + direction = "ingress" 2025-05-19 13:48:52.937846 | orchestrator | 13:48:52.936 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 13:48:52.937853 | orchestrator | 13:48:52.936 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.937859 | orchestrator | 13:48:52.936 STDOUT terraform:  + protocol = "icmp" 2025-05-19 13:48:52.937871 | orchestrator | 13:48:52.936 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.937880 | orchestrator | 13:48:52.936 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 13:48:52.937887 | orchestrator | 13:48:52.936 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 13:48:52.937894 | orchestrator | 13:48:52.936 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 13:48:52.937901 | orchestrator | 13:48:52.936 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 13:48:52.937907 | orchestrator | 13:48:52.936 STDOUT terraform:  } 2025-05-19 13:48:52.937914 | orchestrator | 13:48:52.937 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-19 13:48:52.937921 | orchestrator | 13:48:52.937 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-19 13:48:52.937928 | orchestrator | 13:48:52.937 STDOUT terraform:  + description = "vrrp" 2025-05-19 13:48:52.937934 | orchestrator | 13:48:52.937 STDOUT terraform:  + direction = "ingress" 
2025-05-19 13:48:52.937941 | orchestrator | 13:48:52.937 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 13:48:52.937948 | orchestrator | 13:48:52.937 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.937970 | orchestrator | 13:48:52.937 STDOUT terraform:  + protocol = "112" 2025-05-19 13:48:52.937977 | orchestrator | 13:48:52.937 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.937984 | orchestrator | 13:48:52.937 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 13:48:52.937990 | orchestrator | 13:48:52.937 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 13:48:52.937997 | orchestrator | 13:48:52.937 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 13:48:52.938004 | orchestrator | 13:48:52.937 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 13:48:52.938010 | orchestrator | 13:48:52.937 STDOUT terraform:  } 2025-05-19 13:48:52.938034 | orchestrator | 13:48:52.937 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-19 13:48:52.938042 | orchestrator | 13:48:52.937 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-19 13:48:52.938049 | orchestrator | 13:48:52.937 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 13:48:52.938056 | orchestrator | 13:48:52.937 STDOUT terraform:  + description = "management security group" 2025-05-19 13:48:52.938063 | orchestrator | 13:48:52.937 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.938069 | orchestrator | 13:48:52.937 STDOUT terraform:  + name = "testbed-management" 2025-05-19 13:48:52.938076 | orchestrator | 13:48:52.937 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.938083 | orchestrator | 13:48:52.937 STDOUT terraform:  + stateful = (known after apply) 2025-05-19 13:48:52.938089 | orchestrator | 13:48:52.937 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 13:48:52.938096 | orchestrator | 13:48:52.937 STDOUT terraform:  } 2025-05-19 13:48:52.938112 | orchestrator | 13:48:52.937 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-19 13:48:52.938119 | orchestrator | 13:48:52.937 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-19 13:48:52.938126 | orchestrator | 13:48:52.937 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 13:48:52.938132 | orchestrator | 13:48:52.937 STDOUT terraform:  + description = "node security group" 2025-05-19 13:48:52.938139 | orchestrator | 13:48:52.938 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.938146 | orchestrator | 13:48:52.938 STDOUT terraform:  + name = "testbed-node" 2025-05-19 13:48:52.938155 | orchestrator | 13:48:52.938 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.938178 | orchestrator | 13:48:52.938 STDOUT terraform:  + stateful = (known after apply) 2025-05-19 13:48:52.938222 | orchestrator | 13:48:52.938 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 13:48:52.938233 | orchestrator | 13:48:52.938 STDOUT terraform:  } 2025-05-19 13:48:52.938301 | orchestrator | 13:48:52.938 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-19 13:48:52.938361 | orchestrator | 13:48:52.938 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-19 13:48:52.938407 | orchestrator | 13:48:52.938 STDOUT 
terraform:  + all_tags = (known after apply) 2025-05-19 13:48:52.938451 | orchestrator | 13:48:52.938 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-19 13:48:52.938477 | orchestrator | 13:48:52.938 STDOUT terraform:  + dns_nameservers = [ 2025-05-19 13:48:52.938487 | orchestrator | 13:48:52.938 STDOUT terraform:  + "8.8.8.8", 2025-05-19 13:48:52.938512 | orchestrator | 13:48:52.938 STDOUT terraform:  + "9.9.9.9", 2025-05-19 13:48:52.938522 | orchestrator | 13:48:52.938 STDOUT terraform:  ] 2025-05-19 13:48:52.938555 | orchestrator | 13:48:52.938 STDOUT terraform:  + enable_dhcp = true 2025-05-19 13:48:52.938599 | orchestrator | 13:48:52.938 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-19 13:48:52.938644 | orchestrator | 13:48:52.938 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.938670 | orchestrator | 13:48:52.938 STDOUT terraform:  + ip_version = 4 2025-05-19 13:48:52.938712 | orchestrator | 13:48:52.938 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-19 13:48:52.938758 | orchestrator | 13:48:52.938 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-19 13:48:52.938810 | orchestrator | 13:48:52.938 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-19 13:48:52.938854 | orchestrator | 13:48:52.938 STDOUT terraform:  + network_id = (known after apply) 2025-05-19 13:48:52.938880 | orchestrator | 13:48:52.938 STDOUT terraform:  + no_gateway = false 2025-05-19 13:48:52.938923 | orchestrator | 13:48:52.938 STDOUT terraform:  + region = (known after apply) 2025-05-19 13:48:52.939006 | orchestrator | 13:48:52.938 STDOUT terraform:  + service_types = (known after apply) 2025-05-19 13:48:52.939018 | orchestrator | 13:48:52.938 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 13:48:52.939059 | orchestrator | 13:48:52.939 STDOUT terraform:  + allocation_pool { 2025-05-19 13:48:52.939069 | orchestrator | 13:48:52.939 STDOUT terraform:  + end = "192.168.31.250" 2025-05-19 13:48:52.939113 | orchestrator | 13:48:52.939 STDOUT terraform:  + start = "192.168.31.200" 2025-05-19 13:48:52.939124 | orchestrator | 13:48:52.939 STDOUT terraform:  } 2025-05-19 13:48:52.939133 | orchestrator | 13:48:52.939 STDOUT terraform:  } 2025-05-19 13:48:52.939176 | orchestrator | 13:48:52.939 STDOUT terraform:  # terraform_data.image will be created 2025-05-19 13:48:52.939211 | orchestrator | 13:48:52.939 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-19 13:48:52.939245 | orchestrator | 13:48:52.939 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.939271 | orchestrator | 13:48:52.939 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-19 13:48:52.939304 | orchestrator | 13:48:52.939 STDOUT terraform:  + output = (known after apply) 2025-05-19 13:48:52.939314 | orchestrator | 13:48:52.939 STDOUT terraform:  } 2025-05-19 13:48:52.939360 | orchestrator | 13:48:52.939 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-19 13:48:52.939400 | orchestrator | 13:48:52.939 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-19 13:48:52.939435 | orchestrator | 13:48:52.939 STDOUT terraform:  + id = (known after apply) 2025-05-19 13:48:52.939459 | orchestrator | 13:48:52.939 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-19 13:48:52.939492 | orchestrator | 13:48:52.939 STDOUT terraform:  + output = (known after apply) 2025-05-19 13:48:52.939501 | orchestrator | 13:48:52.939 STDOUT terraform:  } 2025-05-19 13:48:52.939548 | orchestrator | 13:48:52.939 STDOUT 
terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-19 13:48:52.939607 | orchestrator | 13:48:52.939 STDOUT terraform: Changes to Outputs: 2025-05-19 13:48:52.939650 | orchestrator | 13:48:52.939 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-19 13:48:52.939692 | orchestrator | 13:48:52.939 STDOUT terraform:  + private_key = (sensitive value) 2025-05-19 13:48:53.151996 | orchestrator | 13:48:53.151 STDOUT terraform: terraform_data.image: Creating... 2025-05-19 13:48:53.152088 | orchestrator | 13:48:53.151 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=693c6eb4-f7db-ba92-7334-09be1cddab8c] 2025-05-19 13:48:53.153429 | orchestrator | 13:48:53.153 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-19 13:48:53.154421 | orchestrator | 13:48:53.154 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=38d9d945-7c66-6f12-ff8e-80e64304ead5] 2025-05-19 13:48:53.170049 | orchestrator | 13:48:53.169 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-19 13:48:53.170457 | orchestrator | 13:48:53.170 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-19 13:48:53.179576 | orchestrator | 13:48:53.179 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-19 13:48:53.180002 | orchestrator | 13:48:53.179 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-19 13:48:53.180531 | orchestrator | 13:48:53.180 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-19 13:48:53.182114 | orchestrator | 13:48:53.181 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-19 13:48:53.185225 | orchestrator | 13:48:53.183 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-19 13:48:53.185302 | orchestrator | 13:48:53.183 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-19 13:48:53.185314 | orchestrator | 13:48:53.184 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-19 13:48:53.189074 | orchestrator | 13:48:53.188 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-19 13:48:53.620770 | orchestrator | 13:48:53.620 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-19 13:48:53.625542 | orchestrator | 13:48:53.625 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-19 13:48:53.626904 | orchestrator | 13:48:53.626 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-19 13:48:53.633691 | orchestrator | 13:48:53.633 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-19 13:48:53.672729 | orchestrator | 13:48:53.672 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-05-19 13:48:53.680272 | orchestrator | 13:48:53.680 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-19 13:48:59.120796 | orchestrator | 13:48:59.120 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=bf5e4a9f-ae8d-4889-8ab6-ebbfd6188ae4] 2025-05-19 13:48:59.133214 | orchestrator | 13:48:59.132 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 
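The port definition rendered at the top of this plan excerpt combines a fixed management address with a set of allowed_address_pairs, the extra prefixes that Neutron's port security must admit for the testbed's internal VIPs and virtual networks. A minimal sketch of such a port; the resource name is illustrative (the apply output below shows the real ones, manager_port_management and node_port_management[*]), while the field values and the network/subnet references are taken from this log:

# Sketch only: illustrative resource name, values from the plan above.
resource "openstack_networking_port_v2" "port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.15"
  }

  # Without these pairs, port security would drop traffic for the
  # testbed's internal VIP and virtual-network prefixes.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
}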
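The router, its subnet interface, and the security group rules follow directly from the plan. Rule 1 opens SSH (22/tcp) and rule 2 WireGuard (51820/udp) to the world; rules 3 and 4 allow all TCP/UDP but only from the management CIDR 192.168.16.0/20; rule 5 admits ICMP. The vrrp rule uses IP protocol number 112 (VRRP, as used by keepalived), which is why it carries no port range. A sketch of the router plus two representative rules; which security group the VRRP rule joins is not visible in this excerpt, so that reference is an assumption:

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol number for VRRP; not TCP/UDP, so no port range
  remote_ip_prefix  = "0.0.0.0/0"
  # Assumed target group; the log does not show which group this rule joins.
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}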
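The management subnet keeps its DHCP allocation pool (192.168.31.200-250) well clear of the statically assigned port addresses in 192.168.16.x, and the terraform_data resources simply pin the image name so dependent lookups re-run when it changes. Both outputs are declared sensitive, which is why the plan prints "(sensitive value)" and the apply output at the end shows them empty; they remain readable via `terraform output -raw`. Sketches, with the output's value source assumed to be the manager's floating IP:

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]
  enable_dhcp     = true

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address # assumed source
  sensitive = true # redacted on the console, retrievable via `terraform output -raw manager_address`
}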
2025-05-19 13:49:03.182381 | orchestrator | 13:49:03.181 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-19 13:49:03.182498 | orchestrator | 13:49:03.182 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-19 13:49:03.184260 | orchestrator | 13:49:03.184 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-19 13:49:03.185404 | orchestrator | 13:49:03.185 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-19 13:49:03.185543 | orchestrator | 13:49:03.185 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-19 13:49:03.189905 | orchestrator | 13:49:03.189 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-19 13:49:03.626866 | orchestrator | 13:49:03.626 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-19 13:49:03.634670 | orchestrator | 13:49:03.634 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-19 13:49:03.682268 | orchestrator | 13:49:03.681 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-19 13:49:03.745325 | orchestrator | 13:49:03.745 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=b9a454d9-5190-46d7-bf1d-412c3cdef809] 2025-05-19 13:49:03.755921 | orchestrator | 13:49:03.755 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-19 13:49:03.766550 | orchestrator | 13:49:03.766 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=680c5e0d-c4c7-4132-acba-9735c42c1af0] 2025-05-19 13:49:03.772859 | orchestrator | 13:49:03.772 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-19 13:49:03.783272 | orchestrator | 13:49:03.783 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=b41d3d7b-c0e8-42b0-b403-509e3ccc1be2] 2025-05-19 13:49:03.794602 | orchestrator | 13:49:03.794 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-19 13:49:03.798501 | orchestrator | 13:49:03.798 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=5a001b31-cf12-4664-aa8d-ed0bc0514538] 2025-05-19 13:49:03.798614 | orchestrator | 13:49:03.798 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=b351c90f-8a81-4fe1-9713-dc72db3449cb] 2025-05-19 13:49:03.803182 | orchestrator | 13:49:03.803 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-19 13:49:03.804998 | orchestrator | 13:49:03.804 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-05-19 13:49:03.808745 | orchestrator | 13:49:03.808 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=ce4b2895-5caf-48e7-8bbd-df151d11c738] 2025-05-19 13:49:03.818668 | orchestrator | 13:49:03.818 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 
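Three volume families are in flight here: nine node_volume data volumes, six node_base_volume boot volumes, and one manager_base_volume. The base volumes are presumably built from the Glance image resolved by the data.openstack_images_image_v2 lookups so that the servers can boot from volume. Sizes and naming are not visible in this excerpt, so those values below are placeholders:

resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count    = 6
  name     = "testbed-node-base-${count.index}" # assumed naming scheme
  size     = 50                                 # placeholder; size not shown in this log
  image_id = data.openstack_images_image_v2.image_node.id
}

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9
  name  = "testbed-node-volume-${count.index}" # assumed naming scheme
  size  = 20                                   # placeholder; size not shown in this log
}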
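local_sensitive_file.id_rsa, starting just above, persists the generated private key on the orchestrator; the sensitive variant keeps the key material out of plan and apply output, and a plain local_file carries the public half. The key source and file paths below are assumptions (the keypair created earlier as openstack_compute_keypair_v2.key is a plausible origin, since a keypair created without a public_key generates one and exposes it):

resource "local_sensitive_file" "id_rsa" {
  content         = openstack_compute_keypair_v2.key.private_key # assumed key source
  filename        = "${path.module}/.id_rsa"                     # assumed path
  file_permission = "0600"
}

resource "local_file" "id_rsa_pub" {
  content  = openstack_compute_keypair_v2.key.public_key # assumed key source
  filename = "${path.module}/.id_rsa.pub"                # assumed path
}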
2025-05-19 13:49:03.825825 | orchestrator | 13:49:03.825 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=145f582d3b37ad506e63472430b863463ce35aae] 2025-05-19 13:49:03.836472 | orchestrator | 13:49:03.836 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-05-19 13:49:03.842002 | orchestrator | 13:49:03.841 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=20528131ed867c9754e043d9984ff76331606909] 2025-05-19 13:49:03.847136 | orchestrator | 13:49:03.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-19 13:49:03.865908 | orchestrator | 13:49:03.865 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=2a99b222-9040-43f8-85f0-4cedeb957b6a] 2025-05-19 13:49:03.873163 | orchestrator | 13:49:03.872 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=a9ef89b3-6e14-4065-9e7c-f9800ecdb834] 2025-05-19 13:49:03.875037 | orchestrator | 13:49:03.874 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-19 13:49:03.880771 | orchestrator | 13:49:03.880 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=d1a8e6bf-71cd-4139-b86c-b09c993f7964] 2025-05-19 13:49:09.136409 | orchestrator | 13:49:09.135 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-05-19 13:49:09.440876 | orchestrator | 13:49:09.440 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=4ca82e82-a9c1-4c75-99ce-866663de7213] 2025-05-19 13:49:10.245348 | orchestrator | 13:49:10.245 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=5b981db7-3371-4808-b567-91ad86ddeb31] 2025-05-19 13:49:10.250613 | orchestrator | 13:49:10.250 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-19 13:49:13.757735 | orchestrator | 13:49:13.757 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-19 13:49:13.773983 | orchestrator | 13:49:13.773 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-19 13:49:13.795297 | orchestrator | 13:49:13.795 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-05-19 13:49:13.804632 | orchestrator | 13:49:13.804 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-19 13:49:13.805897 | orchestrator | 13:49:13.805 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-19 13:49:13.848742 | orchestrator | 13:49:13.848 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... 
[10s elapsed] 2025-05-19 13:49:14.111877 | orchestrator | 13:49:14.111 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=69160ba1-4fd3-4019-98a7-b22975faa0b8] 2025-05-19 13:49:14.168644 | orchestrator | 13:49:14.168 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=78133c64-849c-40c3-990a-e64897cf2484] 2025-05-19 13:49:14.190753 | orchestrator | 13:49:14.190 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=cb6c5de0-1b22-4c77-a0bd-6caa2d18e501] 2025-05-19 13:49:14.232602 | orchestrator | 13:49:14.232 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=8a851c84-3902-4186-83ed-138a79cd637e] 2025-05-19 13:49:14.264210 | orchestrator | 13:49:14.263 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=99167c27-3ae4-4936-833c-d0be439dac7f] 2025-05-19 13:49:14.283559 | orchestrator | 13:49:14.283 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=8da0273e-10d5-4ffc-9c46-b04f159e35a4] 2025-05-19 13:49:17.535894 | orchestrator | 13:49:17.535 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=167e6f92-1a60-4469-9e0e-2091354ac864] 2025-05-19 13:49:17.549611 | orchestrator | 13:49:17.549 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-19 13:49:17.551862 | orchestrator | 13:49:17.551 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-19 13:49:17.551915 | orchestrator | 13:49:17.551 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-19 13:49:17.732633 | orchestrator | 13:49:17.732 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=03ff5f08-1d16-4835-b9d5-967b2b53e0a3] 2025-05-19 13:49:17.736661 | orchestrator | 13:49:17.736 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=f3133196-5dc4-4df6-a3cc-bd8de01da785] 2025-05-19 13:49:17.741422 | orchestrator | 13:49:17.741 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-19 13:49:17.741501 | orchestrator | 13:49:17.741 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-19 13:49:17.741646 | orchestrator | 13:49:17.741 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-19 13:49:17.741818 | orchestrator | 13:49:17.741 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-19 13:49:17.749285 | orchestrator | 13:49:17.749 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-19 13:49:17.751246 | orchestrator | 13:49:17.751 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-19 13:49:17.758158 | orchestrator | 13:49:17.758 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-19 13:49:17.758205 | orchestrator | 13:49:17.758 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
2025-05-19 13:49:17.758235 | orchestrator | 13:49:17.758 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-19 13:49:18.442321 | orchestrator | 13:49:18.441 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=dee2ccf1-2f0c-413c-9996-8f7bffd34682] 2025-05-19 13:49:18.444076 | orchestrator | 13:49:18.443 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=93686d30-4193-4c0e-a708-89f0f374c7c5] 2025-05-19 13:49:18.459336 | orchestrator | 13:49:18.459 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-19 13:49:18.459529 | orchestrator | 13:49:18.459 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-19 13:49:18.597048 | orchestrator | 13:49:18.596 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=a30f6742-cef0-4ec9-9ba4-1bacefe4e45d] 2025-05-19 13:49:18.598194 | orchestrator | 13:49:18.597 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=de30cc92-d030-45a2-8a22-40c40af47151] 2025-05-19 13:49:18.611104 | orchestrator | 13:49:18.610 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-19 13:49:18.616815 | orchestrator | 13:49:18.616 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-19 13:49:18.736422 | orchestrator | 13:49:18.735 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=c5ec6c11-23b9-4ce7-88f9-bcd083c7ac36] 2025-05-19 13:49:18.758579 | orchestrator | 13:49:18.758 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-05-19 13:49:18.903205 | orchestrator | 13:49:18.902 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=eb91ff6e-b7d6-49e2-92c0-fc7cdc6dfea0] 2025-05-19 13:49:18.909738 | orchestrator | 13:49:18.909 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-19 13:49:19.075306 | orchestrator | 13:49:19.074 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=29edffa6-43c0-4b6f-8cfa-5a18a10cb64c] 2025-05-19 13:49:19.086266 | orchestrator | 13:49:19.085 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
2025-05-19 13:49:19.157142 | orchestrator | 13:49:19.156 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=cb261346-cfeb-40b4-afae-298948b1ad62] 2025-05-19 13:49:19.451088 | orchestrator | 13:49:19.450 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=21b786e7-5f13-4e77-bf2c-771c975e6f85] 2025-05-19 13:49:23.330863 | orchestrator | 13:49:23.330 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=01872746-5a4f-4407-addb-23474dd33988] 2025-05-19 13:49:23.335266 | orchestrator | 13:49:23.335 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=ad9fa078-cdd6-4582-8298-17ad61d0b902] 2025-05-19 13:49:24.053777 | orchestrator | 13:49:24.053 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=e744b9a2-6cfa-4166-aa98-4763ceabcfad] 2025-05-19 13:49:24.176173 | orchestrator | 13:49:24.175 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=53d30ad2-59f0-4f69-9c13-b1f67c563a2c] 2025-05-19 13:49:24.216758 | orchestrator | 13:49:24.216 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=3e07441b-1cc8-444b-89e0-062071b66fdc] 2025-05-19 13:49:24.341148 | orchestrator | 13:49:24.340 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=3052f314-d86b-455f-87e3-98ce9550dd0f] 2025-05-19 13:49:24.461681 | orchestrator | 13:49:24.461 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=1151c3b6-ab7a-493b-9f3c-e2718e43b3d7] 2025-05-19 13:49:25.056284 | orchestrator | 13:49:25.055 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=96aa4394-9d84-4e40-a7f1-6eaa16610b34] 2025-05-19 13:49:25.088322 | orchestrator | 13:49:25.088 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-19 13:49:25.091333 | orchestrator | 13:49:25.091 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-19 13:49:25.092477 | orchestrator | 13:49:25.092 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-19 13:49:25.093357 | orchestrator | 13:49:25.093 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-19 13:49:25.094789 | orchestrator | 13:49:25.094 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-19 13:49:25.103042 | orchestrator | 13:49:25.102 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-19 13:49:25.105021 | orchestrator | 13:49:25.104 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-19 13:49:31.770956 | orchestrator | 13:49:31.770 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=09a34cdb-f671-4407-8546-af6d052d64d4] 2025-05-19 13:49:31.782170 | orchestrator | 13:49:31.781 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-19 13:49:31.783656 | orchestrator | 13:49:31.783 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-05-19 13:49:31.790686 | orchestrator | 13:49:31.790 STDOUT terraform: local_file.inventory: Creating... 
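The manager's floating IP is allocated and then bound to the pre-created management port in a separate association resource, which lets the address survive instance rebuilds; local_file.MANAGER_ADDRESS then drops it where the later "Fetch manager address" task can read it, and local_file.inventory renders the Ansible inventory the same way. A sketch; the pool name and file path are assumptions:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external" # assumed pool name; not visible in this excerpt
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

resource "local_file" "MANAGER_ADDRESS" {
  content  = openstack_networking_floatingip_v2.manager_floating_ip.address
  filename = "${path.module}/.MANAGER_ADDRESS" # assumed path
}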
2025-05-19 13:49:31.791213 | orchestrator | 13:49:31.791 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=0a9396540859d05320a02084fb802c3ce77daafa] 2025-05-19 13:49:31.796767 | orchestrator | 13:49:31.796 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=74a4d86f806ee0874f8261291565c63ffc7c5ec7] 2025-05-19 13:49:32.490575 | orchestrator | 13:49:32.490 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=09a34cdb-f671-4407-8546-af6d052d64d4] 2025-05-19 13:49:35.092697 | orchestrator | 13:49:35.092 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-19 13:49:35.098757 | orchestrator | 13:49:35.098 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-19 13:49:35.098870 | orchestrator | 13:49:35.098 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-19 13:49:35.099023 | orchestrator | 13:49:35.098 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-19 13:49:35.109062 | orchestrator | 13:49:35.108 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-19 13:49:35.109116 | orchestrator | 13:49:35.108 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-19 13:49:45.092997 | orchestrator | 13:49:45.092 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-19 13:49:45.099163 | orchestrator | 13:49:45.098 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-19 13:49:45.099305 | orchestrator | 13:49:45.099 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-19 13:49:45.099563 | orchestrator | 13:49:45.099 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-19 13:49:45.109494 | orchestrator | 13:49:45.109 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-19 13:49:45.109667 | orchestrator | 13:49:45.109 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-19 13:49:55.094611 | orchestrator | 13:49:55.094 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-05-19 13:49:55.099670 | orchestrator | 13:49:55.099 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-05-19 13:49:55.099954 | orchestrator | 13:49:55.099 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-05-19 13:49:55.100135 | orchestrator | 13:49:55.099 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-05-19 13:49:55.110114 | orchestrator | 13:49:55.109 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-05-19 13:49:55.110267 | orchestrator | 13:49:55.110 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2025-05-19 13:49:55.482450 | orchestrator | 13:49:55.481 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=09a184d5-f586-4b34-a987-bda86ef8e31e] 2025-05-19 13:49:55.562391 | orchestrator | 13:49:55.562 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=b4f254b2-a16b-46f0-8d15-83ac32c8648f] 2025-05-19 13:49:55.684635 | orchestrator | 13:49:55.684 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=2d6f9657-8f49-4ab8-a4ec-0772800e68d1] 2025-05-19 13:49:55.685239 | orchestrator | 13:49:55.684 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=b59533d3-50aa-428f-9123-050a35047921] 2025-05-19 13:49:55.707928 | orchestrator | 13:49:55.707 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=11f65c9b-d1d8-4e64-86e6-f0b645f2f43a] 2025-05-19 13:49:55.834454 | orchestrator | 13:49:55.834 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=295a137e-f703-4912-9c00-8e90ff91da6b] 2025-05-19 13:49:55.858523 | orchestrator | 13:49:55.858 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-19 13:49:55.861865 | orchestrator | 13:49:55.861 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-19 13:49:55.863670 | orchestrator | 13:49:55.863 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-19 13:49:55.866265 | orchestrator | 13:49:55.866 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5925307131963764400] 2025-05-19 13:49:55.867487 | orchestrator | 13:49:55.867 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-19 13:49:55.869033 | orchestrator | 13:49:55.868 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-19 13:49:55.869085 | orchestrator | 13:49:55.868 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-19 13:49:55.874683 | orchestrator | 13:49:55.874 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-19 13:49:55.884569 | orchestrator | 13:49:55.884 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-19 13:49:55.887683 | orchestrator | 13:49:55.887 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-19 13:49:55.893361 | orchestrator | 13:49:55.893 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-05-19 13:49:55.899742 | orchestrator | 13:49:55.899 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 
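The instance/volume UUID pairs in the attachment IDs below confirm how the nine data volumes are spread: attachment i lands on node_server[3 + i % 3], so each of the last three nodes receives three volumes. null_resource.node_semaphore is a common barrier pattern: it does nothing itself but completes only once every node server exists, giving later steps a single dependency for the full node set. A sketch with that index arithmetic made explicit:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  # Attachment i goes to node_server[3 + i % 3]; each of the last
  # three nodes ends up with three data volumes.
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}

resource "null_resource" "node_semaphore" {
  # No-op resource that only completes after all node servers exist.
  depends_on = [openstack_compute_instance_v2.node_server]
}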
2025-05-19 13:50:01.165677 | orchestrator | 13:50:01.165 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=11f65c9b-d1d8-4e64-86e6-f0b645f2f43a/2a99b222-9040-43f8-85f0-4cedeb957b6a] 2025-05-19 13:50:01.185140 | orchestrator | 13:50:01.184 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=b59533d3-50aa-428f-9123-050a35047921/b9a454d9-5190-46d7-bf1d-412c3cdef809] 2025-05-19 13:50:01.193539 | orchestrator | 13:50:01.193 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=295a137e-f703-4912-9c00-8e90ff91da6b/b351c90f-8a81-4fe1-9713-dc72db3449cb] 2025-05-19 13:50:01.236707 | orchestrator | 13:50:01.236 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=11f65c9b-d1d8-4e64-86e6-f0b645f2f43a/d1a8e6bf-71cd-4139-b86c-b09c993f7964] 2025-05-19 13:50:01.243756 | orchestrator | 13:50:01.243 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=b59533d3-50aa-428f-9123-050a35047921/b41d3d7b-c0e8-42b0-b403-509e3ccc1be2] 2025-05-19 13:50:01.261371 | orchestrator | 13:50:01.260 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=295a137e-f703-4912-9c00-8e90ff91da6b/ce4b2895-5caf-48e7-8bbd-df151d11c738] 2025-05-19 13:50:01.277647 | orchestrator | 13:50:01.277 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=11f65c9b-d1d8-4e64-86e6-f0b645f2f43a/5a001b31-cf12-4664-aa8d-ed0bc0514538] 2025-05-19 13:50:01.284084 | orchestrator | 13:50:01.283 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=b59533d3-50aa-428f-9123-050a35047921/680c5e0d-c4c7-4132-acba-9735c42c1af0] 2025-05-19 13:50:01.303760 | orchestrator | 13:50:01.303 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=295a137e-f703-4912-9c00-8e90ff91da6b/a9ef89b3-6e14-4065-9e7c-f9800ecdb834] 2025-05-19 13:50:05.897156 | orchestrator | 13:50:05.896 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-19 13:50:15.898163 | orchestrator | 13:50:15.897 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-19 13:50:16.182308 | orchestrator | 13:50:16.181 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=a3908ce8-93f0-4231-8c5c-99c9ec82c51e] 2025-05-19 13:50:16.205904 | orchestrator | 13:50:16.205 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
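manager_server takes noticeably longer than the nodes because it boots from the prepared manager_base_volume rather than an ephemeral disk. A boot-from-volume instance wired to the pre-created port and the "testbed" keypair looks roughly like this; the name and flavor are placeholders, since neither appears in this log excerpt:

resource "openstack_compute_instance_v2" "manager_server" {
  name        = "testbed-manager" # assumed name
  flavor_name = "SCS-4V-16"       # placeholder; flavor not shown in this log
  key_pair    = openstack_compute_keypair_v2.key.name

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.manager_base_volume[0].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = true
  }

  network {
    port = openstack_networking_port_v2.manager_port_management.id
  }
}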
2025-05-19 13:50:16.205974 | orchestrator | 13:50:16.205 STDOUT terraform: Outputs: 2025-05-19 13:50:16.206013 | orchestrator | 13:50:16.205 STDOUT terraform: manager_address = 2025-05-19 13:50:16.206095 | orchestrator | 13:50:16.205 STDOUT terraform: private_key = 2025-05-19 13:50:16.531804 | orchestrator | ok: Runtime: 0:01:33.347502 2025-05-19 13:50:16.570230 | 2025-05-19 13:50:16.570380 | TASK [Create infrastructure (stable)] 2025-05-19 13:50:17.106076 | orchestrator | skipping: Conditional result was False 2025-05-19 13:50:17.115144 | 2025-05-19 13:50:17.115296 | TASK [Fetch manager address] 2025-05-19 13:50:17.583411 | orchestrator | ok 2025-05-19 13:50:17.595895 | 2025-05-19 13:50:17.596130 | TASK [Set manager_host address] 2025-05-19 13:50:17.676289 | orchestrator | ok 2025-05-19 13:50:17.687567 | 2025-05-19 13:50:17.687749 | LOOP [Update ansible collections] 2025-05-19 13:50:29.501512 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-19 13:50:29.502056 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-19 13:50:29.502319 | orchestrator | Starting galaxy collection install process 2025-05-19 13:50:29.502371 | orchestrator | Process install dependency map 2025-05-19 13:50:29.502409 | orchestrator | Starting collection install process 2025-05-19 13:50:29.502443 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2025-05-19 13:50:29.502484 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2025-05-19 13:50:29.502606 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-19 13:50:29.502699 | orchestrator | ok: Item: commons Runtime: 0:00:11.497117 2025-05-19 13:50:33.655268 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-19 13:50:33.655450 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-19 13:50:33.655519 | orchestrator | Starting galaxy collection install process 2025-05-19 13:50:33.655559 | orchestrator | Process install dependency map 2025-05-19 13:50:33.655593 | orchestrator | Starting collection install process 2025-05-19 13:50:33.655625 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-05-19 13:50:33.655656 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-05-19 13:50:33.655687 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-19 13:50:33.655736 | orchestrator | ok: Item: services Runtime: 0:00:03.916121 2025-05-19 13:50:33.678472 | 2025-05-19 13:50:33.678632 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-19 13:50:44.197288 | orchestrator | ok 2025-05-19 13:50:44.208399 | 2025-05-19 13:50:44.208525 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-19 13:51:44.262895 | orchestrator | ok 2025-05-19 13:51:44.272803 | 2025-05-19 13:51:44.272922 | TASK [Fetch manager ssh hostkey] 2025-05-19 13:51:45.850439 | orchestrator | Output suppressed because no_log was given 2025-05-19 13:51:45.867230 | 2025-05-19 13:51:45.867445 | TASK [Get ssh keypair from terraform environment] 2025-05-19 13:51:46.407742 | orchestrator 
| ok: Runtime: 0:00:00.005117 2025-05-19 13:51:46.425258 | 2025-05-19 13:51:46.425428 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-19 13:51:46.474366 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-19 13:51:46.483932 | 2025-05-19 13:51:46.484078 | TASK [Run manager part 0] 2025-05-19 13:51:49.513513 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-19 13:51:49.979876 | orchestrator | 2025-05-19 13:51:49.979934 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-19 13:51:49.979945 | orchestrator | 2025-05-19 13:51:49.979987 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-19 13:51:51.719390 | orchestrator | ok: [testbed-manager] 2025-05-19 13:51:51.719444 | orchestrator | 2025-05-19 13:51:51.719466 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-19 13:51:51.719477 | orchestrator | 2025-05-19 13:51:51.719488 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 13:51:54.232817 | orchestrator | ok: [testbed-manager] 2025-05-19 13:51:54.232871 | orchestrator | 2025-05-19 13:51:54.232879 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-19 13:51:54.883743 | orchestrator | ok: [testbed-manager] 2025-05-19 13:51:54.883827 | orchestrator | 2025-05-19 13:51:54.883846 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-19 13:51:54.947945 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:51:54.947991 | orchestrator | 2025-05-19 13:51:54.948001 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-19 13:51:54.978853 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:51:54.978882 | orchestrator | 2025-05-19 13:51:54.978888 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-19 13:51:55.007662 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:51:55.007711 | orchestrator | 2025-05-19 13:51:55.007720 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-19 13:51:55.034833 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:51:55.034886 | orchestrator | 2025-05-19 13:51:55.034897 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-19 13:51:55.068702 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:51:55.068750 | orchestrator | 2025-05-19 13:51:55.068757 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-19 13:51:55.097027 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:51:55.097083 | orchestrator | 2025-05-19 13:51:55.097091 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-19 13:51:55.121712 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:51:55.121745 | orchestrator | 2025-05-19 13:51:55.121751 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-19 13:51:55.930864 | orchestrator | changed: 
[testbed-manager] 2025-05-19 13:51:55.930930 | orchestrator | 2025-05-19 13:51:55.930937 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-05-19 13:54:42.678224 | orchestrator | changed: [testbed-manager] 2025-05-19 13:54:42.678299 | orchestrator | 2025-05-19 13:54:42.678317 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-19 13:55:55.366339 | orchestrator | changed: [testbed-manager] 2025-05-19 13:55:55.366388 | orchestrator | 2025-05-19 13:55:55.366397 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-19 13:56:15.708815 | orchestrator | changed: [testbed-manager] 2025-05-19 13:56:15.708860 | orchestrator | 2025-05-19 13:56:15.708870 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-19 13:56:24.052086 | orchestrator | changed: [testbed-manager] 2025-05-19 13:56:24.052132 | orchestrator | 2025-05-19 13:56:24.052141 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-19 13:56:24.094977 | orchestrator | ok: [testbed-manager] 2025-05-19 13:56:24.095013 | orchestrator | 2025-05-19 13:56:24.095020 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-19 13:56:24.869599 | orchestrator | ok: [testbed-manager] 2025-05-19 13:56:24.869766 | orchestrator | 2025-05-19 13:56:24.869788 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-19 13:56:25.611745 | orchestrator | changed: [testbed-manager] 2025-05-19 13:56:25.611833 | orchestrator | 2025-05-19 13:56:25.611848 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-19 13:56:32.098240 | orchestrator | changed: [testbed-manager] 2025-05-19 13:56:32.098338 | orchestrator | 2025-05-19 13:56:32.098381 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-19 13:56:37.887394 | orchestrator | changed: [testbed-manager] 2025-05-19 13:56:37.887489 | orchestrator | 2025-05-19 13:56:37.887509 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-19 13:56:40.469121 | orchestrator | changed: [testbed-manager] 2025-05-19 13:56:40.469243 | orchestrator | 2025-05-19 13:56:40.469261 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-19 13:56:42.186756 | orchestrator | changed: [testbed-manager] 2025-05-19 13:56:42.186840 | orchestrator | 2025-05-19 13:56:42.186856 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-19 13:56:43.334615 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-19 13:56:43.334708 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-19 13:56:43.334723 | orchestrator | 2025-05-19 13:56:43.334736 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-19 13:56:43.375875 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-19 13:56:43.375948 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-19 13:56:43.375962 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-05-19 13:56:43.375974 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-05-19 13:56:48.139935 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-19 13:56:48.139979 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-19 13:56:48.139986 | orchestrator | 2025-05-19 13:56:48.139993 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-19 13:56:48.694326 | orchestrator | changed: [testbed-manager] 2025-05-19 13:56:48.694439 | orchestrator | 2025-05-19 13:56:48.694455 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-19 13:59:08.327606 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-19 13:59:08.327714 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-19 13:59:08.327732 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-19 13:59:08.327745 | orchestrator | 2025-05-19 13:59:08.327757 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-19 13:59:10.606794 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-19 13:59:10.606878 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-19 13:59:10.606894 | orchestrator | 2025-05-19 13:59:10.606907 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-19 13:59:10.606920 | orchestrator | 2025-05-19 13:59:10.606931 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 13:59:11.994271 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:11.994401 | orchestrator | 2025-05-19 13:59:11.994433 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-19 13:59:12.040291 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:12.040350 | orchestrator | 2025-05-19 13:59:12.040357 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-19 13:59:12.108833 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:12.108890 | orchestrator | 2025-05-19 13:59:12.108896 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-19 13:59:12.871194 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:12.871379 | orchestrator | 2025-05-19 13:59:12.871396 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-19 13:59:13.591907 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:13.592674 | orchestrator | 2025-05-19 13:59:13.592696 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-19 13:59:15.019420 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-19 13:59:15.019488 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-19 13:59:15.019502 | orchestrator | 2025-05-19 13:59:15.019528 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-19 13:59:16.405824 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:16.405909 | orchestrator | 2025-05-19 13:59:16.405926 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-05-19 13:59:18.110950 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 13:59:18.110992 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-19 13:59:18.111000 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-19 13:59:18.111007 | orchestrator | 2025-05-19 13:59:18.111014 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-19 13:59:18.676101 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:18.676968 | orchestrator | 2025-05-19 13:59:18.676990 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-19 13:59:18.745706 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:59:18.745798 | orchestrator | 2025-05-19 13:59:18.745813 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-19 13:59:19.572557 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 13:59:19.572622 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:19.572637 | orchestrator | 2025-05-19 13:59:19.572650 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-19 13:59:19.608752 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:59:19.608827 | orchestrator | 2025-05-19 13:59:19.608842 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-19 13:59:19.643672 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:59:19.643735 | orchestrator | 2025-05-19 13:59:19.643750 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-19 13:59:19.678408 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:59:19.678467 | orchestrator | 2025-05-19 13:59:19.678480 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-19 13:59:19.729333 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:59:19.729397 | orchestrator | 2025-05-19 13:59:19.729412 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-19 13:59:20.439026 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:20.439084 | orchestrator | 2025-05-19 13:59:20.439098 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-19 13:59:20.439110 | orchestrator | 2025-05-19 13:59:20.439123 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 13:59:21.819730 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:21.819798 | orchestrator | 2025-05-19 13:59:21.819813 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-19 13:59:22.759676 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:22.759757 | orchestrator | 2025-05-19 13:59:22.759773 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 13:59:22.759786 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-19 13:59:22.759798 | orchestrator | 2025-05-19 13:59:23.289110 | orchestrator | ok: Runtime: 0:07:36.068782 2025-05-19 13:59:23.308449 | 2025-05-19 13:59:23.308600 | TASK [Point out that logging in to the manager is now possible] 2025-05-19 13:59:23.340798 |
orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-05-19 13:59:23.347707 | 2025-05-19 13:59:23.347814 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-19 13:59:23.390057 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 2025-05-19 13:59:23.398361 | 2025-05-19 13:59:23.398473 | TASK [Run manager part 1 + 2] 2025-05-19 13:59:24.237619 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-19 13:59:24.289152 | orchestrator | 2025-05-19 13:59:24.289261 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-19 13:59:24.289283 | orchestrator | 2025-05-19 13:59:24.289314 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 13:59:27.176896 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:27.176974 | orchestrator | 2025-05-19 13:59:27.177026 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-19 13:59:27.214271 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:59:27.214325 | orchestrator | 2025-05-19 13:59:27.214342 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-19 13:59:27.251224 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:27.251288 | orchestrator | 2025-05-19 13:59:27.251305 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-19 13:59:27.286462 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:27.286546 | orchestrator | 2025-05-19 13:59:27.286565 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-19 13:59:27.352894 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:27.352956 | orchestrator | 2025-05-19 13:59:27.352972 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-19 13:59:27.407533 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:27.407591 | orchestrator | 2025-05-19 13:59:27.407607 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-19 13:59:27.446280 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-19 13:59:27.446338 | orchestrator | 2025-05-19 13:59:27.446351 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-19 13:59:28.156806 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:28.156869 | orchestrator | 2025-05-19 13:59:28.156884 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-19 13:59:28.203035 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:59:28.203124 | orchestrator | 2025-05-19 13:59:28.203150 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-19 13:59:29.555749 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:29.555824 | orchestrator | 2025-05-19 13:59:29.555842 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-19 13:59:30.127476 | orchestrator | ok: [testbed-manager]
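
For orientation: the 'Run manager part N' wrappers are ordinary ansible-playbook runs executed against the manager host. A representative invocation, matching the part-3 call traced later in this log (the trailing comma after testbed-manager tells Ansible to treat the argument as an inline host list rather than an inventory file):

    ansible-playbook -i testbed-manager, \
        --vault-password-file /opt/configuration/environments/.vault_pass \
        /opt/configuration/ansible/manager-part-3.yml
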
2025-05-19 13:59:30.127540 | orchestrator | 2025-05-19 13:59:30.127556 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-19 13:59:31.245649 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:31.245682 | orchestrator | 2025-05-19 13:59:31.245690 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-19 13:59:44.258809 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:44.258853 | orchestrator | 2025-05-19 13:59:44.258860 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-19 13:59:44.953137 | orchestrator | ok: [testbed-manager] 2025-05-19 13:59:44.953201 | orchestrator | 2025-05-19 13:59:44.953208 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-19 13:59:45.002603 | orchestrator | skipping: [testbed-manager] 2025-05-19 13:59:45.002631 | orchestrator | 2025-05-19 13:59:45.002636 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-19 13:59:45.972672 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:45.972710 | orchestrator | 2025-05-19 13:59:45.972716 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-19 13:59:46.903043 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:46.903085 | orchestrator | 2025-05-19 13:59:46.903094 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-19 13:59:47.481725 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:47.481766 | orchestrator | 2025-05-19 13:59:47.481776 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-19 13:59:47.523892 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-19 13:59:47.524003 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-19 13:59:47.524021 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-19 13:59:47.524038 | orchestrator | deprecation_warnings=False in ansible.cfg. 
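
As the deprecation warning itself states, these messages are harmless here and can be silenced through Ansible's configuration. A minimal sketch, assuming a user-level config file on the manager is acceptable:

    # append the setting named in the warning above to a user-level ansible.cfg
    printf '[defaults]\ndeprecation_warnings = False\n' >> ~/.ansible.cfg
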
2025-05-19 13:59:50.481121 | orchestrator | changed: [testbed-manager] 2025-05-19 13:59:50.481224 | orchestrator | 2025-05-19 13:59:50.481241 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-19 13:59:59.173096 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-19 13:59:59.173153 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-19 13:59:59.173164 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-19 13:59:59.173171 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-19 13:59:59.173181 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-19 13:59:59.173187 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-19 13:59:59.173194 | orchestrator | 2025-05-19 13:59:59.173200 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-19 14:00:00.108618 | orchestrator | changed: [testbed-manager] 2025-05-19 14:00:00.108945 | orchestrator | 2025-05-19 14:00:00.108968 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-19 14:00:00.149461 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:00:00.149531 | orchestrator | 2025-05-19 14:00:00.149546 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-19 14:00:03.024157 | orchestrator | changed: [testbed-manager] 2025-05-19 14:00:03.024237 | orchestrator | 2025-05-19 14:00:03.024253 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-19 14:00:03.059612 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:00:03.059687 | orchestrator | 2025-05-19 14:00:03.059701 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-19 14:01:35.235843 | orchestrator | changed: [testbed-manager] 2025-05-19 14:01:35.235909 | orchestrator | 2025-05-19 14:01:35.235952 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-19 14:01:36.327320 | orchestrator | ok: [testbed-manager] 2025-05-19 14:01:36.327414 | orchestrator | 2025-05-19 14:01:36.327432 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:01:36.327446 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-19 14:01:36.327457 | orchestrator | 2025-05-19 14:01:36.543695 | orchestrator | ok: Runtime: 0:02:12.689894 2025-05-19 14:01:36.561584 | 2025-05-19 14:01:36.561730 | TASK [Reboot manager] 2025-05-19 14:01:38.101834 | orchestrator | ok: Runtime: 0:00:00.993871 2025-05-19 14:01:38.120746 | 2025-05-19 14:01:38.120886 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-19 14:01:52.638959 | orchestrator | ok 2025-05-19 14:01:52.653889 | 2025-05-19 14:01:52.654038 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-19 14:02:52.704070 | orchestrator | ok 2025-05-19 14:02:52.713706 | 2025-05-19 14:02:52.713837 | TASK [Deploy manager + bootstrap nodes] 2025-05-19 14:02:55.073164 | orchestrator | 2025-05-19 14:02:55.073425 | orchestrator | # DEPLOY MANAGER 2025-05-19 14:02:55.073452 | orchestrator | 2025-05-19 14:02:55.073467 | orchestrator | + set -e 2025-05-19 14:02:55.073481 | orchestrator | + echo 2025-05-19 14:02:55.073494 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-19 14:02:55.073512 | orchestrator | + echo 2025-05-19 14:02:55.073564 | orchestrator | + cat /opt/manager-vars.sh 2025-05-19 14:02:55.076321 | orchestrator | export NUMBER_OF_NODES=6 2025-05-19 14:02:55.076358 | orchestrator | 2025-05-19 14:02:55.076378 | orchestrator | export CEPH_VERSION=reef 2025-05-19 14:02:55.076421 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-19 14:02:55.076434 | orchestrator | export MANAGER_VERSION=latest 2025-05-19 14:02:55.076462 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-19 14:02:55.076479 | orchestrator | 2025-05-19 14:02:55.076506 | orchestrator | export ARA=false 2025-05-19 14:02:55.076528 | orchestrator | export TEMPEST=false 2025-05-19 14:02:55.076549 | orchestrator | export IS_ZUUL=true 2025-05-19 14:02:55.076562 | orchestrator | 2025-05-19 14:02:55.076580 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2025-05-19 14:02:55.076592 | orchestrator | export EXTERNAL_API=false 2025-05-19 14:02:55.076603 | orchestrator | 2025-05-19 14:02:55.076624 | orchestrator | export IMAGE_USER=ubuntu 2025-05-19 14:02:55.076635 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-19 14:02:55.076646 | orchestrator | 2025-05-19 14:02:55.076660 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-19 14:02:55.076679 | orchestrator | 2025-05-19 14:02:55.076690 | orchestrator | + echo 2025-05-19 14:02:55.076702 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 14:02:55.077376 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 14:02:55.077408 | orchestrator | ++ INTERACTIVE=false 2025-05-19 14:02:55.077421 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 14:02:55.077435 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 14:02:55.077453 | orchestrator | + source /opt/manager-vars.sh 2025-05-19 14:02:55.077466 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-19 14:02:55.077478 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-19 14:02:55.077513 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-19 14:02:55.077535 | orchestrator | ++ CEPH_VERSION=reef 2025-05-19 14:02:55.077556 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-19 14:02:55.077567 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-19 14:02:55.077578 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 14:02:55.077589 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-19 14:02:55.077611 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-19 14:02:55.077627 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-19 14:02:55.077638 | orchestrator | ++ export ARA=false 2025-05-19 14:02:55.077658 | orchestrator | ++ ARA=false 2025-05-19 14:02:55.077670 | orchestrator | ++ export TEMPEST=false 2025-05-19 14:02:55.077680 | orchestrator | ++ TEMPEST=false 2025-05-19 14:02:55.077691 | orchestrator | ++ export IS_ZUUL=true 2025-05-19 14:02:55.077702 | orchestrator | ++ IS_ZUUL=true 2025-05-19 14:02:55.077713 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2025-05-19 14:02:55.077724 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2025-05-19 14:02:55.077735 | orchestrator | ++ export EXTERNAL_API=false 2025-05-19 14:02:55.077746 | orchestrator | ++ EXTERNAL_API=false 2025-05-19 14:02:55.077760 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-19 14:02:55.077771 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-19 14:02:55.077782 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-19 14:02:55.077793 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-19 
14:02:55.077804 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-19 14:02:55.077815 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-19 14:02:55.077855 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-19 14:02:55.129792 | orchestrator | + docker version 2025-05-19 14:02:55.382404 | orchestrator | Client: Docker Engine - Community 2025-05-19 14:02:55.382509 | orchestrator | Version: 27.5.1 2025-05-19 14:02:55.382528 | orchestrator | API version: 1.47 2025-05-19 14:02:55.382540 | orchestrator | Go version: go1.22.11 2025-05-19 14:02:55.382551 | orchestrator | Git commit: 9f9e405 2025-05-19 14:02:55.382565 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-19 14:02:55.382577 | orchestrator | OS/Arch: linux/amd64 2025-05-19 14:02:55.382588 | orchestrator | Context: default 2025-05-19 14:02:55.382599 | orchestrator | 2025-05-19 14:02:55.382611 | orchestrator | Server: Docker Engine - Community 2025-05-19 14:02:55.382622 | orchestrator | Engine: 2025-05-19 14:02:55.382633 | orchestrator | Version: 27.5.1 2025-05-19 14:02:55.382644 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-05-19 14:02:55.382655 | orchestrator | Go version: go1.22.11 2025-05-19 14:02:55.382666 | orchestrator | Git commit: 4c9b3b0 2025-05-19 14:02:55.382704 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-19 14:02:55.382716 | orchestrator | OS/Arch: linux/amd64 2025-05-19 14:02:55.382727 | orchestrator | Experimental: false 2025-05-19 14:02:55.382738 | orchestrator | containerd: 2025-05-19 14:02:55.382750 | orchestrator | Version: 1.7.27 2025-05-19 14:02:55.382761 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-19 14:02:55.382772 | orchestrator | runc: 2025-05-19 14:02:55.382783 | orchestrator | Version: 1.2.5 2025-05-19 14:02:55.382795 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-19 14:02:55.382806 | orchestrator | docker-init: 2025-05-19 14:02:55.382817 | orchestrator | Version: 0.19.0 2025-05-19 14:02:55.382858 | orchestrator | GitCommit: de40ad0 2025-05-19 14:02:55.386682 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-19 14:02:55.395641 | orchestrator | + set -e 2025-05-19 14:02:55.395679 | orchestrator | + source /opt/manager-vars.sh 2025-05-19 14:02:55.395692 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-19 14:02:55.395703 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-19 14:02:55.395714 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-19 14:02:55.395725 | orchestrator | ++ CEPH_VERSION=reef 2025-05-19 14:02:55.395737 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-19 14:02:55.395790 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-19 14:02:55.395811 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 14:02:55.395859 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-19 14:02:55.395871 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-19 14:02:55.395883 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-19 14:02:55.395901 | orchestrator | ++ export ARA=false 2025-05-19 14:02:55.395912 | orchestrator | ++ ARA=false 2025-05-19 14:02:55.395923 | orchestrator | ++ export TEMPEST=false 2025-05-19 14:02:55.395934 | orchestrator | ++ TEMPEST=false 2025-05-19 14:02:55.395944 | orchestrator | ++ export IS_ZUUL=true 2025-05-19 14:02:55.395967 | orchestrator | ++ IS_ZUUL=true 2025-05-19 14:02:55.395980 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2025-05-19 14:02:55.395995 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2025-05-19 14:02:55.396006 | orchestrator | ++ export EXTERNAL_API=false 2025-05-19 14:02:55.396017 | orchestrator | ++ EXTERNAL_API=false 2025-05-19 14:02:55.396028 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-19 14:02:55.396038 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-19 14:02:55.396049 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-19 14:02:55.396060 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-19 14:02:55.396071 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-19 14:02:55.396081 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-19 14:02:55.396092 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 14:02:55.396107 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 14:02:55.396118 | orchestrator | ++ INTERACTIVE=false 2025-05-19 14:02:55.396129 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 14:02:55.396140 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 14:02:55.396478 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-19 14:02:55.396497 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-19 14:02:55.396508 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-05-19 14:02:55.403571 | orchestrator | + set -e 2025-05-19 14:02:55.403599 | orchestrator | + VERSION=reef 2025-05-19 14:02:55.404806 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-19 14:02:55.410523 | orchestrator | + [[ -n ceph_version: reef ]] 2025-05-19 14:02:55.410551 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-05-19 14:02:55.416055 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-05-19 14:02:55.422583 | orchestrator | + set -e 2025-05-19 14:02:55.422678 | orchestrator | + VERSION=2024.2 2025-05-19 14:02:55.422779 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-19 14:02:55.426625 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-05-19 14:02:55.426655 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-05-19 14:02:55.431654 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-19 14:02:55.432742 | orchestrator | ++ semver latest 7.0.0 2025-05-19 14:02:55.497121 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-19 14:02:55.497222 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-19 14:02:55.497242 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-19 14:02:55.497255 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-19 14:02:55.533234 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-19 14:02:55.535692 | orchestrator | + source /opt/venv/bin/activate 2025-05-19 14:02:55.537050 | orchestrator | ++ deactivate nondestructive 2025-05-19 14:02:55.537120 | orchestrator | ++ '[' -n '' ']' 2025-05-19 14:02:55.537135 | orchestrator | ++ '[' -n '' ']' 2025-05-19 14:02:55.537146 | orchestrator | ++ hash -r 2025-05-19 14:02:55.537402 | orchestrator | ++ '[' -n '' ']' 2025-05-19 14:02:55.537420 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-19 14:02:55.537431 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-19 14:02:55.537444 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-19 14:02:55.537472 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-19 14:02:55.537485 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-19 14:02:55.537502 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-19 14:02:55.537514 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-19 14:02:55.537530 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-19 14:02:55.537544 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-19 14:02:55.537561 | orchestrator | ++ export PATH 2025-05-19 14:02:55.538000 | orchestrator | ++ '[' -n '' ']' 2025-05-19 14:02:55.538163 | orchestrator | ++ '[' -z '' ']' 2025-05-19 14:02:55.538182 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-19 14:02:55.538211 | orchestrator | ++ PS1='(venv) ' 2025-05-19 14:02:55.538223 | orchestrator | ++ export PS1 2025-05-19 14:02:55.538235 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-19 14:02:55.538246 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-19 14:02:55.538290 | orchestrator | ++ hash -r 2025-05-19 14:02:55.538455 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-19 14:02:56.711412 | orchestrator | 2025-05-19 14:02:56.712349 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-19 14:02:56.712387 | orchestrator | 2025-05-19 14:02:56.712432 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-19 14:02:57.257643 | orchestrator | ok: [testbed-manager] 2025-05-19 14:02:57.257753 | orchestrator | 2025-05-19 14:02:57.257771 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-19 14:02:58.192661 | orchestrator | changed: [testbed-manager] 2025-05-19 14:02:58.192774 | orchestrator | 2025-05-19 14:02:58.192789 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-19 14:02:58.192802 | orchestrator | 2025-05-19 14:02:58.192814 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 14:03:00.549375 | orchestrator | ok: [testbed-manager] 2025-05-19 14:03:00.549496 | orchestrator | 2025-05-19 14:03:00.549513 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-19 14:03:05.108571 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-19 14:03:05.108685 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.7.2) 2025-05-19 14:03:05.108702 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:reef) 2025-05-19 14:03:05.108717 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest) 2025-05-19 14:03:05.108728 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.2) 2025-05-19 14:03:05.108739 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.3-alpine) 2025-05-19 14:03:05.108751 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.2.2) 2025-05-19 
14:03:05.108762 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest) 2025-05-19 14:03:05.108773 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest) 2025-05-19 14:03:05.108784 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.9-alpine) 2025-05-19 14:03:05.108795 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.4.0) 2025-05-19 14:03:05.108806 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.19.3) 2025-05-19 14:03:05.108911 | orchestrator | 2025-05-19 14:03:05.108927 | orchestrator | TASK [Check status] ************************************************************ 2025-05-19 14:04:31.682934 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-19 14:04:31.683062 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-19 14:04:31.683079 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-19 14:04:31.683091 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-19 14:04:31.683117 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j623886013683.1543', 'results_file': '/home/dragon/.ansible_async/j623886013683.1543', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683138 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j762549265754.1568', 'results_file': '/home/dragon/.ansible_async/j762549265754.1568', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.7.2', 'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683154 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-19 14:04:31.683165 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
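
The FAILED - RETRYING lines above are the expected polling pattern, not an error: the image pulls were started as asynchronous Ansible jobs (note the ansible_job_id values), and 'Check status' re-checks each job, presumably via the async_status module, until its 'finished' flag is set. The results_file paths show where the status JSON lands on the target, so a stalled pull could also be inspected by hand, e.g. (hypothetical command, job id copied from the first entry above):

    ssh dragon@testbed-manager 'cat ~/.ansible_async/j623886013683.1543'
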
2025-05-19 14:04:31.683177 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j827003237975.1593', 'results_file': '/home/dragon/.ansible_async/j827003237975.1593', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:reef', 'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683188 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j451190790891.1625', 'results_file': '/home/dragon/.ansible_async/j451190790891.1625', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683207 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j396536864967.1658', 'results_file': '/home/dragon/.ansible_async/j396536864967.1658', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.2', 'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683219 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j589650380145.1690', 'results_file': '/home/dragon/.ansible_async/j589650380145.1690', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.3-alpine', 'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683230 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-19 14:04:31.683241 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-19 14:04:31.683261 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j522280737267.1722', 'results_file': '/home/dragon/.ansible_async/j522280737267.1722', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.2.2', 'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683280 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j251063369386.1754', 'results_file': '/home/dragon/.ansible_async/j251063369386.1754', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683299 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j121999422693.1786', 'results_file': '/home/dragon/.ansible_async/j121999422693.1786', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683318 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j816821684364.1819', 'results_file': '/home/dragon/.ansible_async/j816821684364.1819', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.9-alpine', 'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683367 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j217064355218.1853', 'results_file': '/home/dragon/.ansible_async/j217064355218.1853', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.4.0', 'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683388 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j394809629309.1885', 'results_file': '/home/dragon/.ansible_async/j394809629309.1885', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.19.3', 
'ansible_loop_var': 'item'}) 2025-05-19 14:04:31.683408 | orchestrator | 2025-05-19 14:04:31.683453 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-19 14:04:31.725607 | orchestrator | ok: [testbed-manager] 2025-05-19 14:04:31.725704 | orchestrator | 2025-05-19 14:04:31.725720 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-19 14:04:32.262562 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:32.262693 | orchestrator | 2025-05-19 14:04:32.262713 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-19 14:04:32.614840 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:32.614946 | orchestrator | 2025-05-19 14:04:32.614965 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-19 14:04:32.984886 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:32.985003 | orchestrator | 2025-05-19 14:04:32.985021 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-19 14:04:33.049496 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:04:33.049589 | orchestrator | 2025-05-19 14:04:33.049603 | orchestrator | TASK [Check if /etc/OTC_region exists] ****************************************** 2025-05-19 14:04:33.454436 | orchestrator | ok: [testbed-manager] 2025-05-19 14:04:33.454540 | orchestrator | 2025-05-19 14:04:33.454556 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-19 14:04:33.562432 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:04:33.562527 | orchestrator | 2025-05-19 14:04:33.562541 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-19 14:04:33.562554 | orchestrator | 2025-05-19 14:04:33.562565 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 14:04:35.444558 | orchestrator | ok: [testbed-manager] 2025-05-19 14:04:35.444664 | orchestrator | 2025-05-19 14:04:35.444681 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-19 14:04:35.571809 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-19 14:04:35.571912 | orchestrator | 2025-05-19 14:04:35.571927 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-19 14:04:35.632653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-19 14:04:35.632792 | orchestrator | 2025-05-19 14:04:35.632808 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-19 14:04:36.800159 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-19 14:04:36.800314 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-19 14:04:36.800341 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-19 14:04:36.800354 | orchestrator | 2025-05-19 14:04:36.800367 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-19 14:04:38.750347 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-19 14:04:38.750460 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-19
14:04:38.750478 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-19 14:04:38.750491 | orchestrator | 2025-05-19 14:04:38.750523 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-19 14:04:39.401274 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 14:04:39.401382 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:39.401399 | orchestrator | 2025-05-19 14:04:39.401412 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-19 14:04:40.098694 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 14:04:40.098897 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:40.098917 | orchestrator | 2025-05-19 14:04:40.098930 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-19 14:04:40.165244 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:04:40.165334 | orchestrator | 2025-05-19 14:04:40.165345 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-19 14:04:40.546882 | orchestrator | ok: [testbed-manager] 2025-05-19 14:04:40.546985 | orchestrator | 2025-05-19 14:04:40.547001 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-19 14:04:40.610000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-19 14:04:40.610179 | orchestrator | 2025-05-19 14:04:40.610194 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-19 14:04:41.906533 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:41.906641 | orchestrator | 2025-05-19 14:04:41.906654 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-19 14:04:42.719440 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:42.719549 | orchestrator | 2025-05-19 14:04:42.719564 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-19 14:04:46.212280 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:46.212418 | orchestrator | 2025-05-19 14:04:46.212436 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-19 14:04:46.326555 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-19 14:04:46.326656 | orchestrator | 2025-05-19 14:04:46.326671 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-19 14:04:46.390134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-19 14:04:46.390230 | orchestrator | 2025-05-19 14:04:46.390244 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-19 14:04:49.123711 | orchestrator | ok: [testbed-manager] 2025-05-19 14:04:49.123870 | orchestrator | 2025-05-19 14:04:49.123890 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-19 14:04:49.243460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-19 14:04:49.243575 | orchestrator | 2025-05-19 14:04:49.243599 | orchestrator 
| TASK [osism.services.netbox : Create required directories] ********************* 2025-05-19 14:04:50.370208 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-19 14:04:50.370314 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-19 14:04:50.370331 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-19 14:04:50.370343 | orchestrator | 2025-05-19 14:04:50.370355 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-19 14:04:50.449560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-19 14:04:50.449662 | orchestrator | 2025-05-19 14:04:50.449678 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-19 14:04:51.100685 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-19 14:04:51.100833 | orchestrator | 2025-05-19 14:04:51.100851 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-19 14:04:51.753262 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:51.753368 | orchestrator | 2025-05-19 14:04:51.753386 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-19 14:04:52.428482 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 14:04:52.428608 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:52.428625 | orchestrator | 2025-05-19 14:04:52.428639 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-19 14:04:52.863547 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:52.863645 | orchestrator | 2025-05-19 14:04:52.863662 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-19 14:04:53.250573 | orchestrator | ok: [testbed-manager] 2025-05-19 14:04:53.250678 | orchestrator | 2025-05-19 14:04:53.250695 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-19 14:04:53.307085 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:04:53.307188 | orchestrator | 2025-05-19 14:04:53.307211 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-19 14:04:53.970336 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:53.970453 | orchestrator | 2025-05-19 14:04:53.970472 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-19 14:04:54.039297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-19 14:04:54.039427 | orchestrator | 2025-05-19 14:04:54.039455 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-19 14:04:54.871880 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-19 14:04:54.871982 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-19 14:04:54.871996 | orchestrator | 2025-05-19 14:04:54.872030 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-19 14:04:55.552071 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-19 14:04:55.552163 
| orchestrator | 2025-05-19 14:04:55.552179 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-05-19 14:04:56.286358 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:56.286455 | orchestrator | 2025-05-19 14:04:56.286470 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-19 14:04:56.336315 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:04:56.336416 | orchestrator | 2025-05-19 14:04:56.336431 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-19 14:04:57.018095 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:57.018198 | orchestrator | 2025-05-19 14:04:57.018214 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-19 14:04:58.879196 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 14:04:58.879310 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 14:04:58.879326 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 14:04:58.879340 | orchestrator | changed: [testbed-manager] 2025-05-19 14:04:58.879353 | orchestrator | 2025-05-19 14:04:58.879366 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-19 14:05:05.025073 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-19 14:05:05.025192 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-19 14:05:05.025210 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-19 14:05:05.025222 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-19 14:05:05.025234 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-19 14:05:05.025245 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-19 14:05:05.025256 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-19 14:05:05.025267 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-19 14:05:05.025278 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-19 14:05:05.025290 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-19 14:05:05.025301 | orchestrator | 2025-05-19 14:05:05.025314 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-19 14:05:05.688401 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-19 14:05:05.688501 | orchestrator | 2025-05-19 14:05:05.688518 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-19 14:05:05.776276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-19 14:05:05.776376 | orchestrator | 2025-05-19 14:05:05.776392 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-19 14:05:06.493987 | orchestrator | changed: [testbed-manager] 2025-05-19 14:05:06.494166 | orchestrator | 2025-05-19 14:05:06.494214 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-19 14:05:07.144613 | orchestrator | ok: [testbed-manager] 2025-05-19 14:05:07.144720 | orchestrator | 2025-05-19 14:05:07.144791 | orchestrator | 
TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-19 14:05:07.871155 | orchestrator | changed: [testbed-manager] 2025-05-19 14:05:07.871251 | orchestrator | 2025-05-19 14:05:07.871268 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-19 14:05:10.084857 | orchestrator | ok: [testbed-manager] 2025-05-19 14:05:10.084968 | orchestrator | 2025-05-19 14:05:10.084986 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-19 14:05:11.145581 | orchestrator | ok: [testbed-manager] 2025-05-19 14:05:11.145691 | orchestrator | 2025-05-19 14:05:11.145708 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-19 14:05:33.285868 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-05-19 14:05:33.285975 | orchestrator | ok: [testbed-manager] 2025-05-19 14:05:33.285992 | orchestrator | 2025-05-19 14:05:33.286005 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-19 14:05:33.348747 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:05:33.348831 | orchestrator | 2025-05-19 14:05:33.348846 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-19 14:05:33.348859 | orchestrator | 2025-05-19 14:05:33.348871 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-19 14:05:33.392548 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:05:33.392599 | orchestrator | 2025-05-19 14:05:33.392613 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-19 14:05:33.444995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-19 14:05:33.445056 | orchestrator | 2025-05-19 14:05:33.445069 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-19 14:05:34.230539 | orchestrator | ok: [testbed-manager] 2025-05-19 14:05:34.230630 | orchestrator | 2025-05-19 14:05:34.230647 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-19 14:05:34.299380 | orchestrator | ok: [testbed-manager] 2025-05-19 14:05:34.299482 | orchestrator | 2025-05-19 14:05:34.299510 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-19 14:05:34.349448 | orchestrator | ok: [testbed-manager] => { 2025-05-19 14:05:34.349517 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-19 14:05:34.349531 | orchestrator | } 2025-05-19 14:05:34.349543 | orchestrator | 2025-05-19 14:05:34.349554 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-19 14:05:34.936987 | orchestrator | ok: [testbed-manager] 2025-05-19 14:05:34.937098 | orchestrator | 2025-05-19 14:05:34.937126 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-19 14:05:35.714906 | orchestrator | ok: [testbed-manager] 2025-05-19 14:05:35.714997 | orchestrator | 2025-05-19 14:05:35.715014 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-19 14:05:35.787222 | orchestrator | ok: 
[testbed-manager] 2025-05-19 14:05:35.787299 | orchestrator | 2025-05-19 14:05:35.787313 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-19 14:05:35.841851 | orchestrator | ok: [testbed-manager] => { 2025-05-19 14:05:35.841898 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-19 14:05:35.841915 | orchestrator | } 2025-05-19 14:05:35.841928 | orchestrator | 2025-05-19 14:05:35.841940 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-19 14:05:35.904002 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:05:35.904066 | orchestrator | 2025-05-19 14:05:35.904080 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-19 14:05:35.958733 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:05:35.958799 | orchestrator | 2025-05-19 14:05:35.958813 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-19 14:05:36.008623 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:05:36.008721 | orchestrator | 2025-05-19 14:05:36.008765 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-19 14:05:36.096041 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:05:36.096145 | orchestrator | 2025-05-19 14:05:36.096160 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-19 14:05:36.147296 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:05:36.147371 | orchestrator | 2025-05-19 14:05:36.147382 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-19 14:05:36.205211 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:05:36.205250 | orchestrator | 2025-05-19 14:05:36.205262 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-19 14:05:37.436618 | orchestrator | changed: [testbed-manager] 2025-05-19 14:05:37.436760 | orchestrator | 2025-05-19 14:05:37.436778 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-19 14:05:37.498614 | orchestrator | ok: [testbed-manager] 2025-05-19 14:05:37.498716 | orchestrator | 2025-05-19 14:05:37.498731 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-19 14:06:37.557185 | orchestrator | Pausing for 60 seconds 2025-05-19 14:06:37.557291 | orchestrator | changed: [testbed-manager] 2025-05-19 14:06:37.557309 | orchestrator | 2025-05-19 14:06:37.557323 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for a healthy netbox service] *** 2025-05-19 14:06:37.615293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-19 14:06:37.615363 | orchestrator | 2025-05-19 14:06:37.615377 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-19 14:10:06.868656 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-05-19 14:10:06.868756 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-05-19 14:10:06.868772 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-05-19 14:10:06.868784 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-05-19 14:10:06.868796 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-05-19 14:10:06.868807 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-05-19 14:10:06.868818 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-05-19 14:10:06.868829 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-05-19 14:10:06.868840 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-05-19 14:10:06.868852 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-05-19 14:10:06.868863 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-05-19 14:10:06.868874 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-05-19 14:10:06.868885 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-05-19 14:10:06.868896 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-05-19 14:10:06.868908 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left).
2025-05-19 14:10:06.868919 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left).
2025-05-19 14:10:06.868930 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left).
2025-05-19 14:10:06.868959 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left).
2025-05-19 14:10:06.868970 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left).
2025-05-19 14:10:06.869003 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left).
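The FAILED - RETRYING lines are Ansible's retries/until mechanism polling container state; once every container in the project reports healthy, the task succeeds (the changed result just below). A rough shell equivalent of such a poll, with the compose project name "netbox" and the 5-second delay as assumptions:

    # Poll until no container in the compose project is non-healthy,
    # giving up after 60 attempts (mirroring the retry budget above).
    for attempt in $(seq 1 60); do
        unhealthy=$(docker ps --filter label=com.docker.compose.project=netbox \
                    --format '{{.Names}} {{.Status}}' | grep -vc '(healthy)')
        [ "$unhealthy" -eq 0 ] && break
        sleep 5
    done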
2025-05-19 14:10:06.869016 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:06.869030 | orchestrator | 2025-05-19 14:10:06.869042 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-19 14:10:06.869053 | orchestrator | 2025-05-19 14:10:06.869065 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 14:10:08.777862 | orchestrator | ok: [testbed-manager] 2025-05-19 14:10:08.777954 | orchestrator | 2025-05-19 14:10:08.777971 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-19 14:10:08.872440 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-19 14:10:08.872500 | orchestrator | 2025-05-19 14:10:08.872508 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-19 14:10:08.938186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-19 14:10:08.938285 | orchestrator | 2025-05-19 14:10:08.938299 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-19 14:10:10.477943 | orchestrator | ok: [testbed-manager] 2025-05-19 14:10:10.478122 | orchestrator | 2025-05-19 14:10:10.478142 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-19 14:10:10.531141 | orchestrator | ok: [testbed-manager] 2025-05-19 14:10:10.531273 | orchestrator | 2025-05-19 14:10:10.531291 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-19 14:10:10.625697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-19 14:10:10.625790 | orchestrator | 2025-05-19 14:10:10.625804 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-05-19 14:10:13.428297 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-19 14:10:13.428412 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-19 14:10:13.428427 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-19 14:10:13.428439 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-19 14:10:13.428450 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-19 14:10:13.428462 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-19 14:10:13.428473 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-19 14:10:13.428488 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-19 14:10:13.428500 | orchestrator | 2025-05-19 14:10:13.428512 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-19 14:10:14.083346 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:14.083453 | orchestrator | 2025-05-19 14:10:14.083470 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-19 14:10:14.166589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-19 14:10:14.166709 | orchestrator | 2025-05-19 14:10:14.166730 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2025-05-19 14:10:15.385723 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-19 14:10:15.385843 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-19 14:10:15.385867 | orchestrator | 2025-05-19 14:10:15.385888 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-19 14:10:16.002468 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:16.002574 | orchestrator | 2025-05-19 14:10:16.002591 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-19 14:10:16.064150 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:10:16.064271 | orchestrator | 2025-05-19 14:10:16.064288 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-19 14:10:16.125694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-19 14:10:16.125824 | orchestrator | 2025-05-19 14:10:16.125839 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-19 14:10:17.514668 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 14:10:17.514792 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 14:10:17.514808 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:17.514821 | orchestrator | 2025-05-19 14:10:17.514846 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-19 14:10:18.139995 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:18.140093 | orchestrator | 2025-05-19 14:10:18.140107 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-19 14:10:18.223684 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-19 14:10:18.223785 | orchestrator | 2025-05-19 14:10:18.223800 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-19 14:10:19.459108 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 14:10:19.459273 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 14:10:19.459292 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:19.459306 | orchestrator | 2025-05-19 14:10:19.459319 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-19 14:10:20.109772 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:20.109876 | orchestrator | 2025-05-19 14:10:20.109892 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-19 14:10:20.205869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-19 14:10:20.205971 | orchestrator | 2025-05-19 14:10:20.205987 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-19 14:10:20.846423 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:20.846528 | orchestrator | 2025-05-19 14:10:20.846545 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-19 14:10:21.267486 | orchestrator | changed: 
[testbed-manager] 2025-05-19 14:10:21.267588 | orchestrator | 2025-05-19 14:10:21.267605 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-19 14:10:22.507340 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-19 14:10:22.507419 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-19 14:10:22.507427 | orchestrator | 2025-05-19 14:10:22.507436 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-19 14:10:23.140856 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:23.140963 | orchestrator | 2025-05-19 14:10:23.140978 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-19 14:10:23.529771 | orchestrator | ok: [testbed-manager] 2025-05-19 14:10:23.529874 | orchestrator | 2025-05-19 14:10:23.529889 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-19 14:10:23.877285 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:23.877401 | orchestrator | 2025-05-19 14:10:23.877418 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-19 14:10:23.938330 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:10:23.938369 | orchestrator | 2025-05-19 14:10:23.938383 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-19 14:10:24.007321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-19 14:10:24.007382 | orchestrator | 2025-05-19 14:10:24.007396 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-19 14:10:24.052835 | orchestrator | ok: [testbed-manager] 2025-05-19 14:10:24.052870 | orchestrator | 2025-05-19 14:10:24.052883 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-19 14:10:26.161634 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-19 14:10:26.161738 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-19 14:10:26.161754 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-19 14:10:26.161766 | orchestrator | 2025-05-19 14:10:26.161779 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-19 14:10:26.904616 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:26.904712 | orchestrator | 2025-05-19 14:10:26.904725 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-19 14:10:27.609474 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:27.609577 | orchestrator | 2025-05-19 14:10:27.609591 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-19 14:10:28.327163 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:28.327342 | orchestrator | 2025-05-19 14:10:28.327361 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-19 14:10:28.412822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-19 14:10:28.412922 | orchestrator | 2025-05-19 14:10:28.412937 | orchestrator | TASK [osism.services.manager : 
Include scripts vars file] ********************** 2025-05-19 14:10:28.459610 | orchestrator | ok: [testbed-manager] 2025-05-19 14:10:28.459690 | orchestrator | 2025-05-19 14:10:28.459705 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-19 14:10:29.183661 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-19 14:10:29.183781 | orchestrator | 2025-05-19 14:10:29.183798 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-19 14:10:29.257135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-19 14:10:29.257291 | orchestrator | 2025-05-19 14:10:29.257309 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-19 14:10:29.938801 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:29.938908 | orchestrator | 2025-05-19 14:10:29.938926 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-19 14:10:30.568373 | orchestrator | ok: [testbed-manager] 2025-05-19 14:10:30.568465 | orchestrator | 2025-05-19 14:10:30.568479 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-19 14:10:30.627128 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:10:30.627275 | orchestrator | 2025-05-19 14:10:30.627293 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-19 14:10:30.693676 | orchestrator | ok: [testbed-manager] 2025-05-19 14:10:30.693787 | orchestrator | 2025-05-19 14:10:30.693804 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-19 14:10:31.508259 | orchestrator | changed: [testbed-manager] 2025-05-19 14:10:31.508366 | orchestrator | 2025-05-19 14:10:31.508382 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-19 14:11:14.122565 | orchestrator | changed: [testbed-manager] 2025-05-19 14:11:14.122691 | orchestrator | 2025-05-19 14:11:14.122710 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-19 14:11:14.777037 | orchestrator | ok: [testbed-manager] 2025-05-19 14:11:14.777201 | orchestrator | 2025-05-19 14:11:14.777223 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-19 14:11:17.609767 | orchestrator | changed: [testbed-manager] 2025-05-19 14:11:17.609877 | orchestrator | 2025-05-19 14:11:17.609895 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-19 14:11:17.671851 | orchestrator | ok: [testbed-manager] 2025-05-19 14:11:17.671941 | orchestrator | 2025-05-19 14:11:17.671955 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-19 14:11:17.671968 | orchestrator | 2025-05-19 14:11:17.671980 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-19 14:11:17.729063 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:11:17.729194 | orchestrator | 2025-05-19 14:11:17.729220 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-19 14:12:17.786324 | orchestrator | Pausing for 60 seconds 2025-05-19 14:12:17.786397 | 
orchestrator | changed: [testbed-manager] 2025-05-19 14:12:17.786407 | orchestrator | 2025-05-19 14:12:17.786416 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-19 14:12:21.625627 | orchestrator | changed: [testbed-manager] 2025-05-19 14:12:21.625771 | orchestrator | 2025-05-19 14:12:21.625790 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-19 14:13:03.095146 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-19 14:13:03.095248 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-19 14:13:03.095264 | orchestrator | changed: [testbed-manager] 2025-05-19 14:13:03.095277 | orchestrator | 2025-05-19 14:13:03.095289 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-19 14:13:11.159105 | orchestrator | changed: [testbed-manager] 2025-05-19 14:13:11.159229 | orchestrator | 2025-05-19 14:13:11.159248 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-19 14:13:11.248534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-19 14:13:11.248636 | orchestrator | 2025-05-19 14:13:11.248650 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-19 14:13:11.248663 | orchestrator | 2025-05-19 14:13:11.248675 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-19 14:13:11.289083 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:13:11.289180 | orchestrator | 2025-05-19 14:13:11.289193 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:11.289205 | orchestrator | testbed-manager : ok=109 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-19 14:13:11.289216 | orchestrator | 2025-05-19 14:13:11.401712 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-19 14:13:11.401817 | orchestrator | + deactivate 2025-05-19 14:13:11.401834 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-19 14:13:11.401849 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-19 14:13:11.401861 | orchestrator | + export PATH 2025-05-19 14:13:11.401930 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-19 14:13:11.401946 | orchestrator | + '[' -n '' ']' 2025-05-19 14:13:11.401958 | orchestrator | + hash -r 2025-05-19 14:13:11.401969 | orchestrator | + '[' -n '' ']' 2025-05-19 14:13:11.401981 | orchestrator | + unset VIRTUAL_ENV 2025-05-19 14:13:11.401992 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-19 14:13:11.402003 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-19 14:13:11.402014 | orchestrator | + unset -f deactivate 2025-05-19 14:13:11.402079 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-19 14:13:11.406850 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-19 14:13:11.406925 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-19 14:13:11.406939 | orchestrator | + local max_attempts=60 2025-05-19 14:13:11.406950 | orchestrator | + local name=ceph-ansible 2025-05-19 14:13:11.406961 | orchestrator | + local attempt_num=1 2025-05-19 14:13:11.407584 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-19 14:13:11.441242 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 14:13:11.441301 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-19 14:13:11.441313 | orchestrator | + local max_attempts=60 2025-05-19 14:13:11.441325 | orchestrator | + local name=kolla-ansible 2025-05-19 14:13:11.441336 | orchestrator | + local attempt_num=1 2025-05-19 14:13:11.441347 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-19 14:13:11.472766 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 14:13:11.472840 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-19 14:13:11.472854 | orchestrator | + local max_attempts=60 2025-05-19 14:13:11.472866 | orchestrator | + local name=osism-ansible 2025-05-19 14:13:11.473014 | orchestrator | + local attempt_num=1 2025-05-19 14:13:11.473040 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-19 14:13:11.501227 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 14:13:11.501263 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-19 14:13:11.501276 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-19 14:13:12.231773 | orchestrator | ++ semver latest 9.0.0 2025-05-19 14:13:12.287168 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-19 14:13:12.287245 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-19 14:13:12.287260 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1 2025-05-19 14:13:12.287304 | orchestrator | + local max_attempts=60 2025-05-19 14:13:12.287316 | orchestrator | + local name=netbox-netbox-1 2025-05-19 14:13:12.287328 | orchestrator | + local attempt_num=1 2025-05-19 14:13:12.288066 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1 2025-05-19 14:13:12.326646 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 14:13:12.326684 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh 2025-05-19 14:13:12.334849 | orchestrator | + set -e 2025-05-19 14:13:12.334924 | orchestrator | + osism manage netbox --parallel 4 2025-05-19 14:13:14.247026 | orchestrator | 2025-05-19 14:13:14 | INFO  | It takes a moment until task eb18f3c4-001f-4db1-95b9-f6e69bdc16b9 (netbox-manager) has been started and output is visible here. 
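The set -x trace above reveals the body of wait_for_container_healthy: it takes a maximum attempt count and a container name, reads the container's health status with docker inspect, and compares it against "healthy". Reconstructed from the trace; since every container here is healthy on the first check, the sleep and give-up behaviour never appears in the trace and is an assumption:

    # Reconstructed from the trace; retry delay and failure handling assumed.
    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num == max_attempts )); then
                echo "container ${name} did not become healthy in time" >&2
                return 1
            fi
            (( attempt_num++ ))
            sleep 10
        done
    }

    wait_for_container_healthy 60 ceph-ansible   # usage as seen in the trace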
2025-05-19 14:13:16.572129 | orchestrator | 2025-05-19 14:13:16 | INFO  | Wait for NetBox service 2025-05-19 14:13:18.519236 | orchestrator | 2025-05-19 14:13:18.519331 | orchestrator | PLAY [Wait for NetBox service] ************************************************* 2025-05-19 14:13:18.596923 | orchestrator | 2025-05-19 14:13:18.597796 | orchestrator | TASK [Wait for NetBox service REST API] **************************************** 2025-05-19 14:13:19.770483 | orchestrator | ok: [localhost] 2025-05-19 14:13:19.771151 | orchestrator | 2025-05-19 14:13:19.772017 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:19.772381 | orchestrator | 2025-05-19 14:13:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:19.772406 | orchestrator | 2025-05-19 14:13:19 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:19.772738 | orchestrator | localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:20.384469 | orchestrator | 2025-05-19 14:13:20 | INFO  | Manage devicetypes 2025-05-19 14:13:24.589984 | orchestrator | 2025-05-19 14:13:24 | INFO  | Manage moduletypes 2025-05-19 14:13:24.727649 | orchestrator | 2025-05-19 14:13:24 | INFO  | Manage resources 2025-05-19 14:13:24.742353 | orchestrator | 2025-05-19 14:13:24 | INFO  | Handle file /netbox/resources/100-initialise.yml 2025-05-19 14:13:25.790565 | orchestrator | IGNORE_SSL_ERRORS is True, catching exception and disabling SSL verification. 2025-05-19 14:13:25.792740 | orchestrator | Manufacturer queued for addition: Edgecore 2025-05-19 14:13:25.793228 | orchestrator | Manufacturer queued for addition: Other 2025-05-19 14:13:25.794539 | orchestrator | Manufacturer Created: Edgecore - 2 2025-05-19 14:13:25.795357 | orchestrator | Manufacturer Created: Other - 3 2025-05-19 14:13:25.796416 | orchestrator | Device Type Created: Edgecore - 5835-54T-O-AC-F - 2 2025-05-19 14:13:25.797775 | orchestrator | Interface Template Created: Ethernet0 - 10GBASE-T (10GE) - 2 - 1 2025-05-19 14:13:25.798110 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 2 - 2 2025-05-19 14:13:25.800723 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 2 - 3 2025-05-19 14:13:25.801283 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 2 - 4 2025-05-19 14:13:25.802886 | orchestrator | Interface Template Created: Ethernet4 - 10GBASE-T (10GE) - 2 - 5 2025-05-19 14:13:25.804522 | orchestrator | Interface Template Created: Ethernet5 - 10GBASE-T (10GE) - 2 - 6 2025-05-19 14:13:25.805498 | orchestrator | Interface Template Created: Ethernet6 - 10GBASE-T (10GE) - 2 - 7 2025-05-19 14:13:25.806564 | orchestrator | Interface Template Created: Ethernet7 - 10GBASE-T (10GE) - 2 - 8 2025-05-19 14:13:25.807514 | orchestrator | Interface Template Created: Ethernet8 - 10GBASE-T (10GE) - 2 - 9 2025-05-19 14:13:25.808450 | orchestrator | Interface Template Created: Ethernet9 - 10GBASE-T (10GE) - 2 - 10 2025-05-19 14:13:25.809578 | orchestrator | Interface Template Created: Ethernet10 - 10GBASE-T (10GE) - 2 - 11 2025-05-19 14:13:25.810518 | orchestrator | Interface Template Created: Ethernet11 - 10GBASE-T (10GE) - 2 - 12 2025-05-19 14:13:25.811225 | orchestrator | Interface Template Created: Ethernet12 - 10GBASE-T (10GE) - 2 - 13 2025-05-19 14:13:25.812250 | orchestrator | Interface Template Created: Ethernet13 - 10GBASE-T (10GE) - 2 - 14 2025-05-19 
14:13:25.812740 | orchestrator | Interface Template Created: Ethernet14 - 10GBASE-T (10GE) - 2 - 15 2025-05-19 14:13:25.813665 | orchestrator | Interface Template Created: Ethernet15 - 10GBASE-T (10GE) - 2 - 16 2025-05-19 14:13:25.814459 | orchestrator | Interface Template Created: Ethernet16 - 10GBASE-T (10GE) - 2 - 17 2025-05-19 14:13:25.815428 | orchestrator | Interface Template Created: Ethernet17 - 10GBASE-T (10GE) - 2 - 18 2025-05-19 14:13:25.816200 | orchestrator | Interface Template Created: Ethernet18 - 10GBASE-T (10GE) - 2 - 19 2025-05-19 14:13:25.816744 | orchestrator | Interface Template Created: Ethernet19 - 10GBASE-T (10GE) - 2 - 20 2025-05-19 14:13:25.817756 | orchestrator | Interface Template Created: Ethernet20 - 10GBASE-T (10GE) - 2 - 21 2025-05-19 14:13:25.818457 | orchestrator | Interface Template Created: Ethernet21 - 10GBASE-T (10GE) - 2 - 22 2025-05-19 14:13:25.819507 | orchestrator | Interface Template Created: Ethernet22 - 10GBASE-T (10GE) - 2 - 23 2025-05-19 14:13:25.819929 | orchestrator | Interface Template Created: Ethernet23 - 10GBASE-T (10GE) - 2 - 24 2025-05-19 14:13:25.821039 | orchestrator | Interface Template Created: Ethernet24 - 10GBASE-T (10GE) - 2 - 25 2025-05-19 14:13:25.821678 | orchestrator | Interface Template Created: Ethernet25 - 10GBASE-T (10GE) - 2 - 26 2025-05-19 14:13:25.822368 | orchestrator | Interface Template Created: Ethernet26 - 10GBASE-T (10GE) - 2 - 27 2025-05-19 14:13:25.823068 | orchestrator | Interface Template Created: Ethernet27 - 10GBASE-T (10GE) - 2 - 28 2025-05-19 14:13:25.823781 | orchestrator | Interface Template Created: Ethernet28 - 10GBASE-T (10GE) - 2 - 29 2025-05-19 14:13:25.824048 | orchestrator | Interface Template Created: Ethernet29 - 10GBASE-T (10GE) - 2 - 30 2025-05-19 14:13:25.824776 | orchestrator | Interface Template Created: Ethernet30 - 10GBASE-T (10GE) - 2 - 31 2025-05-19 14:13:25.825184 | orchestrator | Interface Template Created: Ethernet31 - 10GBASE-T (10GE) - 2 - 32 2025-05-19 14:13:25.826001 | orchestrator | Interface Template Created: Ethernet32 - 10GBASE-T (10GE) - 2 - 33 2025-05-19 14:13:25.826313 | orchestrator | Interface Template Created: Ethernet33 - 10GBASE-T (10GE) - 2 - 34 2025-05-19 14:13:25.827037 | orchestrator | Interface Template Created: Ethernet34 - 10GBASE-T (10GE) - 2 - 35 2025-05-19 14:13:25.827666 | orchestrator | Interface Template Created: Ethernet35 - 10GBASE-T (10GE) - 2 - 36 2025-05-19 14:13:25.828065 | orchestrator | Interface Template Created: Ethernet36 - 10GBASE-T (10GE) - 2 - 37 2025-05-19 14:13:25.828392 | orchestrator | Interface Template Created: Ethernet37 - 10GBASE-T (10GE) - 2 - 38 2025-05-19 14:13:25.829238 | orchestrator | Interface Template Created: Ethernet38 - 10GBASE-T (10GE) - 2 - 39 2025-05-19 14:13:25.829424 | orchestrator | Interface Template Created: Ethernet39 - 10GBASE-T (10GE) - 2 - 40 2025-05-19 14:13:25.830087 | orchestrator | Interface Template Created: Ethernet40 - 10GBASE-T (10GE) - 2 - 41 2025-05-19 14:13:25.830483 | orchestrator | Interface Template Created: Ethernet41 - 10GBASE-T (10GE) - 2 - 42 2025-05-19 14:13:25.830961 | orchestrator | Interface Template Created: Ethernet42 - 10GBASE-T (10GE) - 2 - 43 2025-05-19 14:13:25.831433 | orchestrator | Interface Template Created: Ethernet43 - 10GBASE-T (10GE) - 2 - 44 2025-05-19 14:13:25.832020 | orchestrator | Interface Template Created: Ethernet44 - 10GBASE-T (10GE) - 2 - 45 2025-05-19 14:13:25.832502 | orchestrator | Interface Template Created: Ethernet45 - 10GBASE-T (10GE) - 2 - 46 2025-05-19 
14:13:25.832829 | orchestrator | Interface Template Created: Ethernet46 - 10GBASE-T (10GE) - 2 - 47 2025-05-19 14:13:25.834492 | orchestrator | Interface Template Created: Ethernet47 - 10GBASE-T (10GE) - 2 - 48 2025-05-19 14:13:25.835017 | orchestrator | Interface Template Created: Ethernet48 - QSFP28 (100GE) - 2 - 49 2025-05-19 14:13:25.835543 | orchestrator | Interface Template Created: Ethernet52 - QSFP28 (100GE) - 2 - 50 2025-05-19 14:13:25.836282 | orchestrator | Interface Template Created: Ethernet56 - QSFP28 (100GE) - 2 - 51 2025-05-19 14:13:25.836643 | orchestrator | Interface Template Created: Ethernet60 - QSFP28 (100GE) - 2 - 52 2025-05-19 14:13:25.837277 | orchestrator | Interface Template Created: Ethernet64 - QSFP28 (100GE) - 2 - 53 2025-05-19 14:13:25.837609 | orchestrator | Interface Template Created: Ethernet68 - QSFP28 (100GE) - 2 - 54 2025-05-19 14:13:25.838103 | orchestrator | Interface Template Created: Ethernet72 - QSFP28 (100GE) - 2 - 55 2025-05-19 14:13:25.839262 | orchestrator | Interface Template Created: Ethernet76 - QSFP28 (100GE) - 2 - 56 2025-05-19 14:13:25.839283 | orchestrator | Interface Template Created: eth0 - 1000BASE-T (1GE) - 2 - 57 2025-05-19 14:13:25.839373 | orchestrator | Power Port Template Created: PS1 - C14 - 2 - 1 2025-05-19 14:13:25.840292 | orchestrator | Power Port Template Created: PS2 - C14 - 2 - 2 2025-05-19 14:13:25.840494 | orchestrator | Console Port Template Created: Console - RJ-45 - 2 - 1 2025-05-19 14:13:25.841587 | orchestrator | Device Type Created: Edgecore - 7726-32X-O-AC-B - 3 2025-05-19 14:13:25.841611 | orchestrator | Interface Template Created: Ethernet0 - QSFP28 (100GE) - 3 - 58 2025-05-19 14:13:25.844585 | orchestrator | Interface Template Created: Ethernet4 - QSFP28 (100GE) - 3 - 59 2025-05-19 14:13:25.844611 | orchestrator | Interface Template Created: Ethernet8 - QSFP28 (100GE) - 3 - 60 2025-05-19 14:13:25.844623 | orchestrator | Interface Template Created: Ethernet12 - QSFP28 (100GE) - 3 - 61 2025-05-19 14:13:25.844894 | orchestrator | Interface Template Created: Ethernet16 - QSFP28 (100GE) - 3 - 62 2025-05-19 14:13:25.845230 | orchestrator | Interface Template Created: Ethernet20 - QSFP28 (100GE) - 3 - 63 2025-05-19 14:13:25.845671 | orchestrator | Interface Template Created: Ethernet24 - QSFP28 (100GE) - 3 - 64 2025-05-19 14:13:25.846137 | orchestrator | Interface Template Created: Ethernet28 - QSFP28 (100GE) - 3 - 65 2025-05-19 14:13:25.846640 | orchestrator | Interface Template Created: Ethernet32 - QSFP28 (100GE) - 3 - 66 2025-05-19 14:13:25.846935 | orchestrator | Interface Template Created: Ethernet36 - QSFP28 (100GE) - 3 - 67 2025-05-19 14:13:25.847322 | orchestrator | Interface Template Created: Ethernet40 - QSFP28 (100GE) - 3 - 68 2025-05-19 14:13:25.847684 | orchestrator | Interface Template Created: Ethernet44 - QSFP28 (100GE) - 3 - 69 2025-05-19 14:13:25.849597 | orchestrator | Interface Template Created: Ethernet48 - QSFP28 (100GE) - 3 - 70 2025-05-19 14:13:25.849619 | orchestrator | Interface Template Created: Ethernet52 - QSFP28 (100GE) - 3 - 71 2025-05-19 14:13:25.849630 | orchestrator | Interface Template Created: Ethernet56 - QSFP28 (100GE) - 3 - 72 2025-05-19 14:13:25.849641 | orchestrator | Interface Template Created: Ethernet60 - QSFP28 (100GE) - 3 - 73 2025-05-19 14:13:25.849652 | orchestrator | Interface Template Created: Ethernet64 - QSFP28 (100GE) - 3 - 74 2025-05-19 14:13:25.849663 | orchestrator | Interface Template Created: Ethernet68 - QSFP28 (100GE) - 3 - 75 2025-05-19 14:13:25.849883 | 
orchestrator | Interface Template Created: Ethernet72 - QSFP28 (100GE) - 3 - 76 2025-05-19 14:13:25.850186 | orchestrator | Interface Template Created: Ethernet76 - QSFP28 (100GE) - 3 - 77 2025-05-19 14:13:25.850322 | orchestrator | Interface Template Created: Ethernet80 - QSFP28 (100GE) - 3 - 78 2025-05-19 14:13:25.850580 | orchestrator | Interface Template Created: Ethernet84 - QSFP28 (100GE) - 3 - 79 2025-05-19 14:13:25.850823 | orchestrator | Interface Template Created: Ethernet88 - QSFP28 (100GE) - 3 - 80 2025-05-19 14:13:25.851100 | orchestrator | Interface Template Created: Ethernet92 - QSFP28 (100GE) - 3 - 81 2025-05-19 14:13:25.851324 | orchestrator | Interface Template Created: Ethernet96 - QSFP28 (100GE) - 3 - 82 2025-05-19 14:13:25.851539 | orchestrator | Interface Template Created: Ethernet100 - QSFP28 (100GE) - 3 - 83 2025-05-19 14:13:25.851760 | orchestrator | Interface Template Created: Ethernet104 - QSFP28 (100GE) - 3 - 84 2025-05-19 14:13:25.852031 | orchestrator | Interface Template Created: Ethernet108 - QSFP28 (100GE) - 3 - 85 2025-05-19 14:13:25.852296 | orchestrator | Interface Template Created: Ethernet112 - QSFP28 (100GE) - 3 - 86 2025-05-19 14:13:25.852491 | orchestrator | Interface Template Created: Ethernet116 - QSFP28 (100GE) - 3 - 87 2025-05-19 14:13:25.852728 | orchestrator | Interface Template Created: Ethernet120 - QSFP28 (100GE) - 3 - 88 2025-05-19 14:13:25.853003 | orchestrator | Interface Template Created: Ethernet124 - QSFP28 (100GE) - 3 - 89 2025-05-19 14:13:25.853274 | orchestrator | Interface Template Created: eth0 - 1000BASE-T (1GE) - 3 - 90 2025-05-19 14:13:25.853523 | orchestrator | Power Port Template Created: PS1 - C14 - 3 - 3 2025-05-19 14:13:25.853709 | orchestrator | Power Port Template Created: PS2 - C14 - 3 - 4 2025-05-19 14:13:25.853995 | orchestrator | Console Port Template Created: Console - RJ-45 - 3 - 2 2025-05-19 14:13:25.854234 | orchestrator | Device Type Created: Other - Baremetal-Device - 4 2025-05-19 14:13:25.854460 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 4 - 91 2025-05-19 14:13:25.854703 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 4 - 92 2025-05-19 14:13:25.855048 | orchestrator | Power Port Template Created: PS1 - C14 - 4 - 5 2025-05-19 14:13:25.855337 | orchestrator | Device Type Created: Other - Manager - 5 2025-05-19 14:13:25.855585 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 5 - 93 2025-05-19 14:13:25.855828 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 5 - 94 2025-05-19 14:13:25.856136 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 5 - 95 2025-05-19 14:13:25.856482 | orchestrator | Interface Template Created: Ethernet3 - QSFP28 (100GE) - 5 - 96 2025-05-19 14:13:25.856703 | orchestrator | Interface Template Created: Ethernet4 - QSFP28 (100GE) - 5 - 97 2025-05-19 14:13:25.856994 | orchestrator | Power Port Template Created: PS1 - C14 - 5 - 6 2025-05-19 14:13:25.857268 | orchestrator | Device Type Created: Other - Node - 6 2025-05-19 14:13:25.857541 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 6 - 98 2025-05-19 14:13:25.857746 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 6 - 99 2025-05-19 14:13:25.858063 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 6 - 100 2025-05-19 14:13:25.858286 | orchestrator | Interface Template Created: Ethernet3 - QSFP28 (100GE) - 6 - 101 2025-05-19 
14:13:25.858494 | orchestrator | Interface Template Created: Ethernet4 - QSFP28 (100GE) - 6 - 102 2025-05-19 14:13:25.858736 | orchestrator | Power Port Template Created: PS1 - C14 - 6 - 7 2025-05-19 14:13:25.858991 | orchestrator | Device Type Created: Other - Baremetal-Housing - 7 2025-05-19 14:13:25.859239 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 7 - 103 2025-05-19 14:13:25.859462 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 7 - 104 2025-05-19 14:13:25.859654 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 7 - 105 2025-05-19 14:13:25.859958 | orchestrator | Interface Template Created: Ethernet3 - QSFP28 (100GE) - 7 - 106 2025-05-19 14:13:25.860164 | orchestrator | Interface Template Created: Ethernet4 - QSFP28 (100GE) - 7 - 107 2025-05-19 14:13:25.860416 | orchestrator | Power Port Template Created: PS1 - C14 - 7 - 8 2025-05-19 14:13:25.860607 | orchestrator | Manufacturer queued for addition: .gitkeep 2025-05-19 14:13:25.860830 | orchestrator | Manufacturer Created: .gitkeep - 4 2025-05-19 14:13:25.861217 | orchestrator | 2025-05-19 14:13:25.861374 | orchestrator | PLAY [Manage NetBox resources defined in 100-initialise.yml] ******************* 2025-05-19 14:13:25.862170 | orchestrator | 2025-05-19 14:13:25.862188 | orchestrator | TASK [Manage NetBox resource Testbed of type tenant] *************************** 2025-05-19 14:13:27.102174 | orchestrator | changed: [localhost] 2025-05-19 14:13:27.102568 | orchestrator | 2025-05-19 14:13:27.103598 | orchestrator | TASK [Manage NetBox resource Discworld of type site] *************************** 2025-05-19 14:13:28.413974 | orchestrator | changed: [localhost] 2025-05-19 14:13:28.416156 | orchestrator | 2025-05-19 14:13:28.417016 | orchestrator | TASK [Manage NetBox resource Ankh-Morpork of type location] ******************** 2025-05-19 14:13:29.777961 | orchestrator | changed: [localhost] 2025-05-19 14:13:29.779218 | orchestrator | 2025-05-19 14:13:29.780117 | orchestrator | TASK [Manage NetBox resource OOB Testbed of type vlan] ************************* 2025-05-19 14:13:31.285123 | orchestrator | changed: [localhost] 2025-05-19 14:13:31.285233 | orchestrator | 2025-05-19 14:13:31.285680 | orchestrator | TASK [Manage NetBox resource of type prefix] *********************************** 2025-05-19 14:13:33.011711 | orchestrator | changed: [localhost] 2025-05-19 14:13:33.013064 | orchestrator | 2025-05-19 14:13:33.013152 | orchestrator | TASK [Manage NetBox resource of type prefix] *********************************** 2025-05-19 14:13:34.231530 | orchestrator | changed: [localhost] 2025-05-19 14:13:34.234765 | orchestrator | 2025-05-19 14:13:34.234804 | orchestrator | TASK [Manage NetBox resource of type prefix] *********************************** 2025-05-19 14:13:35.407116 | orchestrator | changed: [localhost] 2025-05-19 14:13:35.411842 | orchestrator | 2025-05-19 14:13:35.413078 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-19 14:13:36.675915 | orchestrator | changed: [localhost] 2025-05-19 14:13:36.677328 | orchestrator | 2025-05-19 14:13:36.678625 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-19 14:13:37.729204 | orchestrator | changed: [localhost] 2025-05-19 14:13:37.729318 | orchestrator | 2025-05-19 14:13:37.729343 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 
14:13:37.729367 | orchestrator | localhost : ok=9 changed=9 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:37.729381 | orchestrator | 2025-05-19 14:13:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:37.729438 | orchestrator | 2025-05-19 14:13:37 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:37.959663 | orchestrator | 2025-05-19 14:13:37 | INFO  | Handle file /netbox/resources/200-rack-1000.yml 2025-05-19 14:13:39.040700 | orchestrator | 2025-05-19 14:13:39.040850 | orchestrator | PLAY [Manage NetBox resources defined in 200-rack-1000.yml] ******************** 2025-05-19 14:13:39.092012 | orchestrator | 2025-05-19 14:13:39.092406 | orchestrator | TASK [Manage NetBox resource 1000 of type rack] ******************************** 2025-05-19 14:13:40.530699 | orchestrator | changed: [localhost] 2025-05-19 14:13:40.531709 | orchestrator | 2025-05-19 14:13:40.533595 | orchestrator | TASK [Manage NetBox resource testbed-switch-0 of type device] ****************** 2025-05-19 14:13:41.876373 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not resolve id of device_type: 7726-32x-o-ac-b"} 2025-05-19 14:13:41.877107 | orchestrator | 2025-05-19 14:13:41.877947 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:41.878772 | orchestrator | 2025-05-19 14:13:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:41.878858 | orchestrator | 2025-05-19 14:13:41 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:41.879388 | orchestrator | localhost : ok=1 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:42.120988 | orchestrator | 2025-05-19 14:13:42 | INFO  | Handle file /netbox/resources/300-testbed-switch-0.yml 2025-05-19 14:13:42.138069 | orchestrator | 2025-05-19 14:13:42 | INFO  | Handle file /netbox/resources/300-testbed-node-9.yml 2025-05-19 14:13:42.155778 | orchestrator | 2025-05-19 14:13:42 | INFO  | Handle file /netbox/resources/300-testbed-node-3.yml 2025-05-19 14:13:42.156285 | orchestrator | 2025-05-19 14:13:42 | INFO  | Handle file /netbox/resources/300-testbed-node-1.yml 2025-05-19 14:13:43.341235 | orchestrator | 2025-05-19 14:13:43.341343 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-3.yml] *************** 2025-05-19 14:13:43.388629 | orchestrator | 2025-05-19 14:13:43.389056 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-9.yml] *************** 2025-05-19 14:13:43.390187 | orchestrator | 2025-05-19 14:13:43.393079 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-0.yml] ************* 2025-05-19 14:13:43.393110 | orchestrator | 2025-05-19 14:13:43.393131 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:43.446262 | orchestrator | 2025-05-19 14:13:43.448221 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:43.458313 | orchestrator | 2025-05-19 14:13:43.458610 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:43.520313 | orchestrator | 2025-05-19 14:13:43.522090 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-1.yml] *************** 2025-05-19 14:13:43.572119 | orchestrator | 2025-05-19 
14:13:43.572562 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:44.768113 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“name” is not a valid value.\"]}"} 2025-05-19 14:13:44.768677 | orchestrator | 2025-05-19 14:13:44.770602 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:44.770615 | orchestrator | 2025-05-19 14:13:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:44.770621 | orchestrator | 2025-05-19 14:13:44 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:44.770828 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:44.898705 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“name” is not a valid value.\"]}"} 2025-05-19 14:13:44.901884 | orchestrator | 2025-05-19 14:13:44.901953 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:44.901984 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:44.901997 | orchestrator | 2025-05-19 14:13:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:44.902010 | orchestrator | 2025-05-19 14:13:44 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:45.012043 | orchestrator | 2025-05-19 14:13:45 | INFO  | Handle file /netbox/resources/300-testbed-switch-3.yml 2025-05-19 14:13:45.145698 | orchestrator | 2025-05-19 14:13:45 | INFO  | Handle file /netbox/resources/300-testbed-node-6.yml 2025-05-19 14:13:45.404190 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“device” is not a valid value.\"]}"} 2025-05-19 14:13:45.408970 | orchestrator | 2025-05-19 14:13:45.410340 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:45.410475 | orchestrator | 2025-05-19 14:13:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:45.410499 | orchestrator | 2025-05-19 14:13:45 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:45.410586 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:45.632295 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“name” is not a valid value.\"]}"} 2025-05-19 14:13:45.632406 | orchestrator | 2025-05-19 14:13:45.635572 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:45.636479 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:45.636554 | orchestrator | 2025-05-19 14:13:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:45.636570 | orchestrator | 2025-05-19 14:13:45 | INFO  | Please wait and do not abort execution. 
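Two related failures show up in this run: the device testbed-switch-0 could not be created because the lookup of device_type 7726-32x-o-ac-b resolved nothing (the type was imported above under the model name "7726-32X-O-AC-B", so the slug being queried may not match what was generated), and the subsequent cable tasks fail with device_id errors, consistent with the devices the cables should attach to never having been created. A quick way to verify the slug mismatch against the NetBox REST API; URL and token are placeholders:

    # Does a device type with this slug exist? A count of 0 would explain
    # "Could not resolve id of device_type" above.
    curl -s -H "Authorization: Token $NETBOX_TOKEN" \
      "https://netbox.example.com/api/dcim/device-types/?slug=7726-32x-o-ac-b" | jq '.count'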
2025-05-19 14:13:45.723632 | orchestrator | 2025-05-19 14:13:45 | INFO  | Handle file /netbox/resources/300-testbed-switch-2.yml 2025-05-19 14:13:45.897678 | orchestrator | 2025-05-19 14:13:45 | INFO  | Handle file /netbox/resources/300-testbed-node-5.yml 2025-05-19 14:13:46.258075 | orchestrator | 2025-05-19 14:13:46.260527 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-3.yml] ************* 2025-05-19 14:13:46.309196 | orchestrator | 2025-05-19 14:13:46.309380 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:46.523735 | orchestrator | 2025-05-19 14:13:46.523870 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-6.yml] *************** 2025-05-19 14:13:46.621508 | orchestrator | 2025-05-19 14:13:46.622130 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:46.874882 | orchestrator | 2025-05-19 14:13:46.874951 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-2.yml] ************* 2025-05-19 14:13:46.920510 | orchestrator | 2025-05-19 14:13:46.920563 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:47.018638 | orchestrator | 2025-05-19 14:13:47.018714 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-5.yml] *************** 2025-05-19 14:13:47.052837 | orchestrator | 2025-05-19 14:13:47.052915 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:47.723281 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“name” is not a valid value.\"]}"} 2025-05-19 14:13:47.723370 | orchestrator | 2025-05-19 14:13:47.723716 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:47.723741 | orchestrator | 2025-05-19 14:13:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:47.723779 | orchestrator | 2025-05-19 14:13:47 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:47.723960 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:47.907424 | orchestrator | 2025-05-19 14:13:47 | INFO  | Handle file /netbox/resources/300-testbed-node-8.yml 2025-05-19 14:13:48.110891 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“name” is not a valid value.\"]}"} 2025-05-19 14:13:48.111070 | orchestrator | 2025-05-19 14:13:48.111457 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:48.111574 | orchestrator | 2025-05-19 14:13:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:48.111784 | orchestrator | 2025-05-19 14:13:48 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:48.112461 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:48.346147 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "msg": "{\"device_id\":[\"“device” is not a valid value.\"]}"} 2025-05-19 14:13:48.346304 | orchestrator | 2025-05-19 14:13:48.346691 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:48.346999 | orchestrator | 2025-05-19 14:13:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:48.347026 | orchestrator | 2025-05-19 14:13:48 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:48.347594 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:48.353727 | orchestrator | 2025-05-19 14:13:48 | INFO  | Handle file /netbox/resources/300-testbed-node-0.yml 2025-05-19 14:13:48.575558 | orchestrator | 2025-05-19 14:13:48 | INFO  | Handle file /netbox/resources/300-testbed-manager.yml 2025-05-19 14:13:48.986340 | orchestrator | 2025-05-19 14:13:48.986544 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-8.yml] *************** 2025-05-19 14:13:49.028080 | orchestrator | 2025-05-19 14:13:49.029013 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:49.531423 | orchestrator | 2025-05-19 14:13:49.532474 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-0.yml] *************** 2025-05-19 14:13:49.621076 | orchestrator | 2025-05-19 14:13:49.622858 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:50.057106 | orchestrator | 2025-05-19 14:13:50.057286 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-manager.yml] ************** 2025-05-19 14:13:50.095355 | orchestrator | 2025-05-19 14:13:50.095544 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:51.005563 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“device” is not a valid value.\"]}"} 2025-05-19 14:13:51.006058 | orchestrator | 2025-05-19 14:13:51.006759 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:51.006805 | orchestrator | 2025-05-19 14:13:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:51.006814 | orchestrator | 2025-05-19 14:13:51 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:51.006975 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:51.209017 | orchestrator | 2025-05-19 14:13:51 | INFO  | Handle file /netbox/resources/300-testbed-node-4.yml 2025-05-19 14:13:51.268054 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“device” is not a valid value.\"]}"} 2025-05-19 14:13:51.270253 | orchestrator | 2025-05-19 14:13:51.270291 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:51.270321 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:51.270358 | orchestrator | 2025-05-19 14:13:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:51.270371 | orchestrator | 2025-05-19 14:13:51 | INFO  | Please wait and do not abort execution. 
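The "Handle file" messages interleave because osism manage netbox was invoked with --parallel 4: each resource file becomes its own playbook run (so one failing file does not abort the others), with up to four files in flight at once. A conceptual sketch of that dispatch, where handle-resource-file.sh is a hypothetical stand-in for the real runner:

    # Process every resource file independently, four at a time.
    find /netbox/resources -name '*.yml' -print0 |
        xargs -0 -P 4 -n 1 /netbox/handle-resource-file.sh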
2025-05-19 14:13:51.460373 | orchestrator | 2025-05-19 14:13:51 | INFO  | Handle file /netbox/resources/300-testbed-node-7.yml 2025-05-19 14:13:52.298474 | orchestrator | 2025-05-19 14:13:52.298582 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-4.yml] *************** 2025-05-19 14:13:52.340485 | orchestrator | 2025-05-19 14:13:52.340722 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:52.512216 | orchestrator | 2025-05-19 14:13:52.512359 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-7.yml] *************** 2025-05-19 14:13:52.561192 | orchestrator | 2025-05-19 14:13:52.561752 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-19 14:13:53.504031 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“device” is not a valid value.\"]}"} 2025-05-19 14:13:53.504117 | orchestrator | 2025-05-19 14:13:53.504132 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:53.504176 | orchestrator | 2025-05-19 14:13:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:53.504191 | orchestrator | 2025-05-19 14:13:53 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:53.505329 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:53.564955 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“name” is not a valid value.\"]}"} 2025-05-19 14:13:53.569582 | orchestrator | 2025-05-19 14:13:53.569985 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:53.570949 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:53.570998 | orchestrator | 2025-05-19 14:13:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:53.571014 | orchestrator | 2025-05-19 14:13:53 | INFO  | Please wait and do not abort execution. 2025-05-19 14:13:53.714175 | orchestrator | 2025-05-19 14:13:53 | INFO  | Handle file /netbox/resources/300-testbed-node-2.yml 2025-05-19 14:13:53.756258 | orchestrator | 2025-05-19 14:13:53 | INFO  | Handle file /netbox/resources/300-testbed-switch-1.yml 2025-05-19 14:13:53.773899 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“device” is not a valid value.\"]}"} 2025-05-19 14:13:53.774112 | orchestrator | 2025-05-19 14:13:53.777202 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:13:53.777245 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-19 14:13:53.777278 | orchestrator | 2025-05-19 14:13:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:13:53.777293 | orchestrator | 2025-05-19 14:13:53 | INFO  | Please wait and do not abort execution. 
2025-05-19 14:13:54.518174 | orchestrator |
2025-05-19 14:13:54.518720 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-1.yml] *************
2025-05-19 14:13:54.556447 | orchestrator |
2025-05-19 14:13:54.556530 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-19 14:13:54.657254 | orchestrator |
2025-05-19 14:13:54.657516 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-2.yml] ***************
2025-05-19 14:13:54.698913 | orchestrator |
2025-05-19 14:13:54.700013 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-19 14:13:55.762198 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“device” is not a valid value.\"]}"}
2025-05-19 14:13:55.762363 | orchestrator |
2025-05-19 14:13:55.763182 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:13:55.763458 | orchestrator | 2025-05-19 14:13:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 14:13:55.763651 | orchestrator | 2025-05-19 14:13:55 | INFO  | Please wait and do not abort execution.
2025-05-19 14:13:55.765485 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
2025-05-19 14:13:55.827453 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“device” is not a valid value.\"]}"}
2025-05-19 14:13:55.831288 | orchestrator |
2025-05-19 14:13:55.832124 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:13:55.832240 | orchestrator | 2025-05-19 14:13:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 14:13:55.832379 | orchestrator | 2025-05-19 14:13:55 | INFO  | Please wait and do not abort execution.
2025-05-19 14:13:55.835323 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
2025-05-19 14:13:55.916735 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "{\"device_id\":[\"“device” is not a valid value.\"]}"}
2025-05-19 14:13:55.918537 | orchestrator |
2025-05-19 14:13:55.918651 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:13:55.920303 | orchestrator | 2025-05-19 14:13:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 14:13:55.920378 | orchestrator | 2025-05-19 14:13:55 | INFO  | Please wait and do not abort execution.
2025-05-19 14:13:55.920699 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
2025-05-19 14:13:56.099392 | orchestrator | 2025-05-19 14:13:56 | INFO  | Runtime: 39.5408s
2025-05-19 14:13:56.433504 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-05-19 14:13:56.614706 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-19 14:13:56.614845 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.614862 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.614873 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Restarting (0) 47 seconds ago
2025-05-19 14:13:56.614885 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2025-05-19 14:13:56.614896 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.614907 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" conductor 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.614945 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.614957 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2025-05-19 14:13:56.614983 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.614995 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2025-05-19 14:13:56.615006 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" netbox 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.615017 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.615028 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2025-05-19 14:13:56.615039 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.615050 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.615060 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.615071 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2025-05-19 14:13:56.621042 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-05-19 14:13:56.747652 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-19 14:13:56.747729 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 8 minutes (healthy)
2025-05-19 14:13:56.747742 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy)
2025-05-19 14:13:56.747754 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 8 minutes (healthy) 5432/tcp
2025-05-19 14:13:56.747766 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 8 minutes (healthy) 6379/tcp
2025-05-19 14:13:56.753824 | orchestrator | ++ semver latest 7.0.0
2025-05-19 14:13:56.799011 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-19 14:13:56.799076 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-19 14:13:56.799092 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-05-19 14:13:56.802594 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-05-19 14:13:58.324278 | orchestrator | 2025-05-19 14:13:58 | INFO  | Task b1ecea3e-af6b-4a48-b19f-b8682a319d4e (resolvconf) was prepared for execution.
2025-05-19 14:13:58.324390 | orchestrator | 2025-05-19 14:13:58 | INFO  | It takes a moment until task b1ecea3e-af6b-4a48-b19f-b8682a319d4e (resolvconf) has been started and output is visible here.
2025-05-19 14:14:02.223055 | orchestrator |
2025-05-19 14:14:02.223190 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-05-19 14:14:02.230880 | orchestrator |
2025-05-19 14:14:02.231610 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-19 14:14:02.232014 | orchestrator | Monday 19 May 2025 14:14:02 +0000 (0:00:00.148) 0:00:00.148 ************
2025-05-19 14:14:06.085187 | orchestrator | ok: [testbed-manager]
2025-05-19 14:14:06.085652 | orchestrator |
2025-05-19 14:14:06.087373 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-19 14:14:06.089411 | orchestrator | Monday 19 May 2025 14:14:06 +0000 (0:00:03.865) 0:00:04.014 ************
2025-05-19 14:14:06.166733 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:14:06.166862 | orchestrator |
2025-05-19 14:14:06.166946 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-19 14:14:06.167830 | orchestrator | Monday 19 May 2025 14:14:06 +0000 (0:00:00.077) 0:00:04.091 ************
2025-05-19 14:14:06.254944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-05-19 14:14:06.255744 | orchestrator |
2025-05-19 14:14:06.256661 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-19 14:14:06.257582 | orchestrator | Monday 19 May 2025 14:14:06 +0000 (0:00:00.092) 0:00:04.184 ************
2025-05-19 14:14:06.328869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-05-19 14:14:06.330123 | orchestrator |
2025-05-19 14:14:06.331214 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-19 14:14:06.332084 | orchestrator | Monday 19 May 2025 14:14:06 +0000 (0:00:00.074) 0:00:04.258 ************
2025-05-19 14:14:07.360281 | orchestrator | ok: [testbed-manager]
2025-05-19 14:14:07.360369 | orchestrator |
2025-05-19 14:14:07.361344 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-19 14:14:07.362014 | orchestrator | Monday 19 May 2025 14:14:07 +0000 (0:00:01.028) 0:00:05.286 ************
2025-05-19 14:14:07.424895 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:14:07.427496 | orchestrator |
2025-05-19 14:14:07.428147 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-19 14:14:07.429271 | orchestrator | Monday 19 May 2025 14:14:07 +0000 (0:00:00.067) 0:00:05.354 ************
2025-05-19 14:14:07.913563 | orchestrator | ok: [testbed-manager]
2025-05-19 14:14:07.913656 | orchestrator |
2025-05-19 14:14:07.914225 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-19 14:14:07.914655 | orchestrator | Monday 19 May 2025 14:14:07 +0000 (0:00:00.488) 0:00:05.843 ************
2025-05-19 14:14:07.998712 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:14:07.998859 | orchestrator |
2025-05-19 14:14:07.998936 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-19 14:14:08.000546 | orchestrator | Monday 19 May 2025 14:14:07 +0000 (0:00:00.085) 0:00:05.928 ************
2025-05-19 14:14:08.529264 | orchestrator | changed: [testbed-manager]
2025-05-19 14:14:08.529709 | orchestrator |
2025-05-19 14:14:08.530736 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-19 14:14:08.531084 | orchestrator | Monday 19 May 2025 14:14:08 +0000 (0:00:00.527) 0:00:06.456 ************
2025-05-19 14:14:09.684560 | orchestrator | changed: [testbed-manager]
2025-05-19 14:14:09.685425 | orchestrator |
2025-05-19 14:14:09.685870 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-19 14:14:09.686866 | orchestrator | Monday 19 May 2025 14:14:09 +0000 (0:00:01.155) 0:00:07.612 ************
2025-05-19 14:14:10.628342 | orchestrator | ok: [testbed-manager]
2025-05-19 14:14:10.628632 | orchestrator |
2025-05-19 14:14:10.629351 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-19 14:14:10.630911 | orchestrator | Monday 19 May 2025 14:14:10 +0000 (0:00:00.945) 0:00:08.557 ************
2025-05-19 14:14:10.703405 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-05-19 14:14:10.703547 | orchestrator |
2025-05-19 14:14:10.704077 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-19 14:14:10.704265 | orchestrator | Monday 19 May 2025 14:14:10 +0000 (0:00:00.073) 0:00:08.630 ************
2025-05-19 14:14:11.861278 | orchestrator | changed: [testbed-manager]
2025-05-19 14:14:11.861550 | orchestrator |
2025-05-19 14:14:11.862566 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:14:11.862616 | orchestrator | 2025-05-19 14:14:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 14:14:11.862811 | orchestrator | 2025-05-19 14:14:11 | INFO  | Please wait and do not abort execution.
2025-05-19 14:14:11.863506 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 14:14:11.864232 | orchestrator |
2025-05-19 14:14:11.864840 | orchestrator |
2025-05-19 14:14:11.865524 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:14:11.866144 | orchestrator | Monday 19 May 2025 14:14:11 +0000 (0:00:01.159) 0:00:09.789 ************
2025-05-19 14:14:11.866544 | orchestrator | ===============================================================================
2025-05-19 14:14:11.867253 | orchestrator | Gathering Facts --------------------------------------------------------- 3.87s
2025-05-19 14:14:11.867681 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s
2025-05-19 14:14:11.868193 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.16s
2025-05-19 14:14:11.868540 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.03s
2025-05-19 14:14:11.868971 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s
2025-05-19 14:14:11.869470 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s
2025-05-19 14:14:11.869643 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2025-05-19 14:14:11.870197 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-05-19 14:14:11.870490 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-05-19 14:14:11.870905 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s
2025-05-19 14:14:11.872024 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-05-19 14:14:11.872861 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s
2025-05-19 14:14:11.873566 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-05-19 14:14:12.327218 | orchestrator | + osism apply sshconfig
2025-05-19 14:14:14.034886 | orchestrator | 2025-05-19 14:14:14 | INFO  | Task 088406f0-ffcc-4ca7-80ee-1766f0a43d11 (sshconfig) was prepared for execution.
2025-05-19 14:14:14.034995 | orchestrator | 2025-05-19 14:14:14 | INFO  | It takes a moment until task 088406f0-ffcc-4ca7-80ee-1766f0a43d11 (sshconfig) has been started and output is visible here.
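The resolvconf run above reports three changes on the manager: /etc/resolv.conf becomes a symlink to the systemd-resolved stub file, the resolved configuration files are copied in, and the service is restarted to pick them up. Reduced to stock Ansible modules, the link and restart steps amount to something like this minimal sketch; the role's real tasks, templates, and variables are more elaborate:

# Minimal sketch of two of the "changed" steps from the play above,
# using stock modules; not the osism.commons.resolvconf implementation.
- name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
  ansible.builtin.file:
    src: /run/systemd/resolve/stub-resolv.conf
    dest: /etc/resolv.conf
    state: link
    force: true

- name: Restart systemd-resolved service
  ansible.builtin.service:
    name: systemd-resolved
    state: restarted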
2025-05-19 14:14:17.415998 | orchestrator |
2025-05-19 14:14:17.416194 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-05-19 14:14:17.417994 | orchestrator |
2025-05-19 14:14:17.419150 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-05-19 14:14:17.419595 | orchestrator | Monday 19 May 2025 14:14:17 +0000 (0:00:00.123) 0:00:00.123 ************
2025-05-19 14:14:17.901101 | orchestrator | ok: [testbed-manager]
2025-05-19 14:14:17.902297 | orchestrator |
2025-05-19 14:14:17.902492 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exists] ********************
2025-05-19 14:14:17.903604 | orchestrator | Monday 19 May 2025 14:14:17 +0000 (0:00:00.487) 0:00:00.611 ************
2025-05-19 14:14:18.354180 | orchestrator | changed: [testbed-manager]
2025-05-19 14:14:18.354290 | orchestrator |
2025-05-19 14:14:18.354950 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exists] *************
2025-05-19 14:14:18.355667 | orchestrator | Monday 19 May 2025 14:14:18 +0000 (0:00:00.451) 0:00:01.063 ************
2025-05-19 14:14:23.214065 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-05-19 14:14:23.214191 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-19 14:14:23.214351 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-05-19 14:14:23.215186 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-05-19 14:14:23.215661 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-05-19 14:14:23.217404 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-05-19 14:14:23.218307 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-05-19 14:14:23.218919 | orchestrator |
2025-05-19 14:14:23.219680 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-05-19 14:14:23.220170 | orchestrator | Monday 19 May 2025 14:14:23 +0000 (0:00:04.858) 0:00:05.922 ************
2025-05-19 14:14:23.283045 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:14:23.283684 | orchestrator |
2025-05-19 14:14:23.284160 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-05-19 14:14:23.285179 | orchestrator | Monday 19 May 2025 14:14:23 +0000 (0:00:00.071) 0:00:05.993 ************
2025-05-19 14:14:23.861168 | orchestrator | changed: [testbed-manager]
2025-05-19 14:14:23.861511 | orchestrator |
2025-05-19 14:14:23.864874 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:14:23.865674 | orchestrator | 2025-05-19 14:14:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 14:14:23.865809 | orchestrator | 2025-05-19 14:14:23 | INFO  | Please wait and do not abort execution.
2025-05-19 14:14:23.867238 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-19 14:14:23.868241 | orchestrator |
2025-05-19 14:14:23.869242 | orchestrator |
2025-05-19 14:14:23.869932 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:14:23.870783 | orchestrator | Monday 19 May 2025 14:14:23 +0000 (0:00:00.578) 0:00:06.571 ************
2025-05-19 14:14:23.871153 | orchestrator | ===============================================================================
2025-05-19 14:14:23.872046 | orchestrator | osism.commons.sshconfig : Ensure config for each host exists ------------- 4.86s
2025-05-19 14:14:23.872787 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s
2025-05-19 14:14:23.873403 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.49s
2025-05-19 14:14:23.873954 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exists -------------------- 0.45s
2025-05-19 14:14:23.874580 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-05-19 14:14:24.326334 | orchestrator | + osism apply known-hosts
2025-05-19 14:14:26.059371 | orchestrator | 2025-05-19 14:14:26 | INFO  | Task cae6e5a2-bd60-45bb-88d2-17a16dba43ce (known-hosts) was prepared for execution.
2025-05-19 14:14:26.059468 | orchestrator | 2025-05-19 14:14:26 | INFO  | It takes a moment until task cae6e5a2-bd60-45bb-88d2-17a16dba43ce (known-hosts) has been started and output is visible here.
2025-05-19 14:14:30.022512 | orchestrator |
2025-05-19 14:14:30.022617 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-05-19 14:14:30.022629 | orchestrator |
2025-05-19 14:14:30.023587 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-05-19 14:14:30.024216 | orchestrator | Monday 19 May 2025 14:14:30 +0000 (0:00:00.171) 0:00:00.171 ************
2025-05-19 14:14:36.003876 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-05-19 14:14:36.003985 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-05-19 14:14:36.004060 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-05-19 14:14:36.005287 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-19 14:14:36.006982 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-19 14:14:36.007722 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-19 14:14:36.008466 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-19 14:14:36.009395 | orchestrator |
2025-05-19 14:14:36.010130 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-05-19 14:14:36.010837 | orchestrator | Monday 19 May 2025 14:14:35 +0000 (0:00:05.982) 0:00:06.154 ************
2025-05-19 14:14:36.167586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-19 14:14:36.167737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-05-19 14:14:36.168991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-05-19 14:14:36.170423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-05-19 14:14:36.171083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-05-19 14:14:36.172657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-05-19 14:14:36.173214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-05-19 14:14:36.173832 | orchestrator |
2025-05-19 14:14:36.174766 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:36.175373 | orchestrator | Monday 19 May 2025 14:14:36 +0000 (0:00:00.168) 0:00:06.322 ************
2025-05-19 14:14:37.314770 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKGmn29NA0tYYoqEvteSu8ZW3NIr7jsgeNjxCcLvcOnL547v4D+JlKUl7lfOokAyedmm/FVYT6pTp5JtCRkexPM=)
2025-05-19 14:14:37.315087 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXDrgDi1igGJmsvGNtE7dGcmYDACEBpPzOrENXmJ6AwrKYjCNySFQGO6kL8W3IPS/zo7ks7KdGGuqRE1LMfca8flgrVUm6ZOQJvbGc/Ry/fuZy7XvIUn98efxBddT22XXqRrTzPO75ry/L0RLFGOqmnUoAq22pboF7Ny/WEgINNltE51QBFQ6qJyLvdQWe3w9oJVcqc1kvotDA6h5VD7Hwitd1jC7cG294AnCajQCBSsxt6kSBLLSUXttXNY/qjvSZwscKvbwNdjh8X/FDWWBvbPjsUskWZmPvM2q1HYJfG7YLRRE74k32QkZ+qOV7sIXiTiWkPeDE5Ccah6rwOWmFLautIWHK6YXvqJQ0pIzyG7begfkzeq6JJE8w4KIaLSaCUnQp1dhNqIWZS/NAl6ScRr5hxSbcpzxlEASLEBSFCXfhkRXo4t7b08PLwUiG0IWkEh1hISI6AcvTBlHrDZPPshnLiFQ1sNWgeBBEE7ntpWbEqiwRyXZZghx4M0Cr46s=)
2025-05-19 14:14:37.316158 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAniXva2YB5JGCZPFGKA11PWM0x6jRffXGGQcxmW9LwB)
2025-05-19 14:14:37.317083 | orchestrator |
2025-05-19 14:14:37.319826 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:37.320508 | orchestrator | Monday 19 May 2025 14:14:37 +0000 (0:00:01.144) 0:00:07.466 ************
2025-05-19 14:14:38.379765 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9GPVb43CEybKNxZ0M7sqEhgzhz66hDuCU4gjkshIWGkPldOTWWQx1P/DgPZa4FUWfIxACjDK3kLN682qh9vab2rFBTn5I20H+rRfn/yhXzKQ8rh4rbUxCINqzkAYo/mEfghLjnvcHz8iUz0DeWEWR4LgTG36aSmMVJ1rOiz0+wjKqVCBy2yF6jQdCZ0UusUvPaHiT2UcfX2ADt/PHKW+QhTLFGvPFyhMFw1wp0bJc26FX78aH8A/aJQLJG8D4mz+GkpEfxjL589mzIrl344MaxcoF1rtGzECdmTL66rG6GrrvDh9GGsGx58pKXH7Ci2X3lLgiPSbbqZjOfEDcXxZ9sUDQIyatlelw9+2VgVDOhbBCyK7GeNcjRYA0/nWwShCq1ub6WfFvxhQaDLSUjK1zKpjwpC4pmUowcniDq2Ahfaik/Bx8iPJWTpjhSxjeGuZQs30acGejqjcrxf379SL3HaNUqV6t09ttnRIE+RinbCGUYjKNsl3SvRNjyMdZk/E=)
2025-05-19 14:14:38.379951 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIqiWY/zsFlMmPfnUA+Br4vcWj8ZN0A56jmY/L8mNrWfy82yT9VXvlrTokrmSWhTi+VN/mTRzcICG5nQjMJC+6s=)
2025-05-19 14:14:38.380765 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILG5hq15ktaNQmk8ird/m703TLWEos55CgtCYTSEjnEM)
2025-05-19 14:14:38.381612 | orchestrator |
2025-05-19 14:14:38.382965 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:38.383635 | orchestrator | Monday 19 May 2025 14:14:38 +0000 (0:00:01.065) 0:00:08.532 ************
2025-05-19 14:14:39.421065 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBHW3foNlaGNg+ktHJbGgjMgNXN6b7T+Fb7LalFG7FRp)
2025-05-19 14:14:39.421790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3eV4I3d1IW/mGn3gtI9bKEFwIukFsnKbXgjFtwk3fZBkW58NjKw2CHffXIzNtqhtAVOVvFpke7DJTvcFyT3cORCRJrKUeKpUZa3lptIjoWAEFJL0fOLLcVit0MHJFgeTN1uINxRnv6RzSt5L4tkvDJAUH5mcbyjf0hepCX01R0QNyYOQ5hTks36ojji9OMDBbEBW+iq6DTMvj4LtxdNBBxOE/u3BeTMu/MM3ov5rWJ3CSC5JIOSQMvbKRZpqqXl0dcs2kyYvOIS/jTHFdlP0VmIlOd4CX8MYEPJvOsPip2S1wweGWUktU++FzxNERhGaq4ujwccIiqE4YozRfmSW3TIOwSpYx22NZUdqA9XgSjLk3DjLNnTjM4BT04uXZsVmsenumvv9prw9Ku5XWG+duIhTWZmZN7HGzFhxh7RuRnEmv7z517r7cCLoOmO2+w+rmX16MRJtgCecmSmyVfQwrIi5QXAPonPqFXxF+PNlBlvgW1SxPpzNja1/uY7AV5a8=)
2025-05-19 14:14:39.422599 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFvJGK4dmkYqvR3KIt/OaPLb89YmUOCFC8TfyTtHD+8Nu+YtLFBM+IARJTbAHOvlMJp9sqxyyNloTnAIlG4kpDA=)
2025-05-19 14:14:39.423600 | orchestrator |
2025-05-19 14:14:39.424335 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:39.425187 | orchestrator | Monday 19 May 2025 14:14:39 +0000 (0:00:01.041) 0:00:09.574 ************
2025-05-19 14:14:40.499525 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHFv82j/+0H3I9yV/Okc7WxQ07OwIKBr6pELhg06YHUJ)
2025-05-19 14:14:40.501168 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCf3BDeIXu0urOhgZwGVfOtRB7XjuEpHJpv4uYyBRZliGZhKI9FOuPXzHBPEXS6pLa/PwkGc/Z4rQ4asAc34duh3xEqmjZ1ZfT+lfakII6a+HGS17iMc+pnB7/6jeJnXB37DD4rguNm5bo6JqaBI4xgCRanp7RzwR20UOwtKclsaAMT4J63Yl49BQT84fzracFJqpOPBfENiJm/oiu8bzt1RJpe/Q9C9aG56jcPBDCYoswkibmTjPCFH5BDZ4tPd+x1OXc7T/6y7VMaje38/WfSa2t8b5UoaMnuIwGgQBQ3XEVENltv8nLteYNZF17CQdXgNvk0Agj2PUwgymre9doub3Fp5xh25FlU2nqag7IrQrW8hDmL50hb7kq+QKbJCbKuPNy2R3L5VY1CrdBqFjQmqRqEo2Pp4Rvm5v9jSL80hfgOZjwqSeouagT0AGqYwW3ik5npBSr+h1TMA2EF6EvEZi86bnaC8/U5+nNW7NF6jjRXlItvfQ1DfNBJ0/A/xNE=)
2025-05-19 14:14:40.501871 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCZlIZDuO8UGkLWUFqjkOheyg1SvlPPqyWHB65U8T6gV513xiqdTqbxFq1JFVCAgJL5pclvNNW6r6iFMrEIM+0Y=)
2025-05-19 14:14:40.503341 | orchestrator |
2025-05-19 14:14:40.504330 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:40.505367 | orchestrator | Monday 19 May 2025 14:14:40 +0000 (0:00:01.079) 0:00:10.653 ************
2025-05-19 14:14:41.535150 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIie5agCHQQ+0HEY2urgQTBOZrADHBZKaVlwURY+6jenG0MPbxIx60A8OSh1iyvWDOX+zfTkcFMShdHecDp4dWOAQuN1L5mioAPYsTGQMjrTViKJ+GBHePftN1R2TGQnwbRLdD4G3TaLVMB17KwBxFFQmuu1mqTZW7NlM48FiXgMdmnEVnKUhnmKUkAUM6nlY5GYg/2ESFtfwytHLGw+jTRhOFxnselgWZbp268QhEoME9ihmvujqxcc2Cq8/3JgDjDvx9rwBDjogs83IrNhbGDsjvO/TF/LV1kqMzyVYB2FFj+T8/cJ+CQMkpfwp3d7Ko+ApxG2DFz1rLsXP6wnOX/xZsgMKO5fxZyfBQW+9zUKHZWvuCwK3Lc9q1v403oH1H7Muey2NVIzssdGuHSa5r20KVEa/B3c/LeLRnj8USqq+PJppIxbxc+mCIi7qLHCYpbqAi1b6q8zRmlNDaGiVFSgmD/P79J0LHm0FkQdAWt1vkLSxwwrCcKqUB2DYiTAs=)
2025-05-19 14:14:41.536430 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLNbPA2bXdlfWZadyWjx0uBjYBAa8JaI/2zmKGBYBNdPH5RLwDCNb7oDWObqeNLj4uLkJz+Sp+pvdxNBXvWBmDI=)
2025-05-19 14:14:41.537478 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICI0YjHfGhAbeOYJfp4uGDIQh0nbw/4okLDd4zHdhwdb)
2025-05-19 14:14:41.538205 | orchestrator |
2025-05-19 14:14:41.539228 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:41.539923 | orchestrator | Monday 19 May 2025 14:14:41 +0000 (0:00:01.032) 0:00:11.686 ************
2025-05-19 14:14:42.539046 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcPcGV8YWX9QGzHP5mMN6I6GoBC6IgfZr6Piq4j0t83lBkGm+HFUVgbwbEZ6lEK7uHwONnuokgTztLuBMKBBJ5bjuw+wKpGU25SQ2EFa7dv1Kbq96gwSY9ygkq1+H4+CEgCUnOjUB8VWA9jXKtrjfpQ/QYF9OPJcDBDR7/ry/uFIdT6I3T1KoYkhpfXdUFjrKZeXUelRm6L/MRf3vhXEA8dn3bYelQDdJbiqHXGZGYEErYJCrfrRDcyPa4s057C/Bm9kqODRhRbXTwrk8+zSvvwu/ulnEItEJeFJ0O0WoNt3ye7uQS3jW236qH1h+a4RArjqMIhZD4t9PIUWiZAhjvkDf11TkPe2gWP9Hg5GJ5c/pCoGw2MWrJg73zg8B2NP9BLpiVbN56vdQBJbPkkBOh6AUEyrDRoioC/NFk9M5aIlrW71PggGlp275OQI5Hr18E8aMRZBJsxxLWKKC0a2m/5l9TukWFbHWWpRoKv4jE78Bhrp2Gq886t9Op2rD68VU=)
2025-05-19 14:14:42.539933 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH1GIavbzIeNUo5C/6NqYkzy/ShwIpzucQ3Vg5Sx00FV04W21QiB8MsGM5FZP7TcannmJIG9ZJpMMlCZwjAnISI=)
2025-05-19 14:14:42.540343 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICAf/319IFVdwqdxlkOiAj5NRyWHPzu1RW/8/3tjEGMm)
2025-05-19 14:14:42.541197 | orchestrator |
2025-05-19 14:14:42.541920 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:42.542579 | orchestrator | Monday 19 May 2025 14:14:42 +0000 (0:00:01.005) 0:00:12.692 ************
2025-05-19 14:14:43.548079 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0XlHYKR2Stuplxw1s81ndniQBwhxytvw1W7jR6NGE0M55zXKvNPdE+tTESkKDTZG4UOTE2LID/aIvQJC6O+N8vzoPA7Hj1pVyLQ7/XNr5Uqte78mQE6AGA6KOHdcP4rjsds0yGNS4SrlKjJQzq3W/lOLw5eo0v68dqzMWTUrt2e2sqhmmNYq9G8OkxQVaTbIRGMMGas4t+UadBQvbvHTU16PQDoBjjxLytlbNV7X0ZPHd5NTTc0BamAWZceX0vFlNiq/W8HEgCk4K0fM1mYXV1iomVEFndi2f/xZQOVtr9XJQ1380MgGK4r1vj16JI5bGcPcRALRDHUnadfIQI7sjzmc6NBFZzxKRUrYyVAEkloOx/w4ormTQOj3PuhrtQq0dLBrwzE7zMa5F+KQNRa3W3jN3mxn2RJ4QyQqdNuFTEItLy3cgw1+s5A7jeWGrZdar3AD/qUTAzpSUQ7OWw0jmvtWqBYc6dRoW1M2Yv5NBMFKXJwWOk8KWMKbUebaJ0XU=)
2025-05-19 14:14:43.549007 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKwLXUcg+5pjRi5I1S8N0kFVCmo3PGIjbqXZ/WiKM8lCeVIrdqnPp4kgkm5fYde5/oPsbvLj7raDEX1hnhx/2bM=)
2025-05-19 14:14:43.549899 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJTMTSh7NcSfiwv52uoOmpN7AlPxly6ASXwIjSLar3q0)
2025-05-19 14:14:43.550720 | orchestrator |
2025-05-19 14:14:43.551474 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-05-19 14:14:43.552739 | orchestrator | Monday 19 May 2025 14:14:43 +0000 (0:00:01.009) 0:00:13.701 ************
2025-05-19 14:14:48.760546 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-05-19 14:14:48.762111 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-05-19 14:14:48.763254 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-05-19 14:14:48.763985 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-19 14:14:48.764573 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-19 14:14:48.765513 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-19 14:14:48.767343 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-19 14:14:48.767856 | orchestrator |
2025-05-19 14:14:48.768511 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-05-19 14:14:48.769046 | orchestrator | Monday 19 May 2025 14:14:48 +0000 (0:00:05.210) 0:00:18.912 ************
2025-05-19 14:14:48.942203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-19 14:14:48.942301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-05-19 14:14:48.942310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-05-19 14:14:48.942360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-05-19 14:14:48.943722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-05-19 14:14:48.944315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-05-19 14:14:48.945247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-05-19 14:14:48.945957 | orchestrator |
2025-05-19 14:14:48.946261 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:48.946883 | orchestrator | Monday 19 May 2025 14:14:48 +0000 (0:00:00.181) 0:00:19.093 ************
2025-05-19 14:14:50.035242 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXDrgDi1igGJmsvGNtE7dGcmYDACEBpPzOrENXmJ6AwrKYjCNySFQGO6kL8W3IPS/zo7ks7KdGGuqRE1LMfca8flgrVUm6ZOQJvbGc/Ry/fuZy7XvIUn98efxBddT22XXqRrTzPO75ry/L0RLFGOqmnUoAq22pboF7Ny/WEgINNltE51QBFQ6qJyLvdQWe3w9oJVcqc1kvotDA6h5VD7Hwitd1jC7cG294AnCajQCBSsxt6kSBLLSUXttXNY/qjvSZwscKvbwNdjh8X/FDWWBvbPjsUskWZmPvM2q1HYJfG7YLRRE74k32QkZ+qOV7sIXiTiWkPeDE5Ccah6rwOWmFLautIWHK6YXvqJQ0pIzyG7begfkzeq6JJE8w4KIaLSaCUnQp1dhNqIWZS/NAl6ScRr5hxSbcpzxlEASLEBSFCXfhkRXo4t7b08PLwUiG0IWkEh1hISI6AcvTBlHrDZPPshnLiFQ1sNWgeBBEE7ntpWbEqiwRyXZZghx4M0Cr46s=)
2025-05-19 14:14:50.035379 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKGmn29NA0tYYoqEvteSu8ZW3NIr7jsgeNjxCcLvcOnL547v4D+JlKUl7lfOokAyedmm/FVYT6pTp5JtCRkexPM=)
2025-05-19 14:14:50.035499 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAniXva2YB5JGCZPFGKA11PWM0x6jRffXGGQcxmW9LwB)
2025-05-19 14:14:50.035518 | orchestrator |
2025-05-19 14:14:50.036066 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:50.036501 | orchestrator | Monday 19 May 2025 14:14:50 +0000 (0:00:01.092) 0:00:20.186 ************
2025-05-19 14:14:51.060139 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9GPVb43CEybKNxZ0M7sqEhgzhz66hDuCU4gjkshIWGkPldOTWWQx1P/DgPZa4FUWfIxACjDK3kLN682qh9vab2rFBTn5I20H+rRfn/yhXzKQ8rh4rbUxCINqzkAYo/mEfghLjnvcHz8iUz0DeWEWR4LgTG36aSmMVJ1rOiz0+wjKqVCBy2yF6jQdCZ0UusUvPaHiT2UcfX2ADt/PHKW+QhTLFGvPFyhMFw1wp0bJc26FX78aH8A/aJQLJG8D4mz+GkpEfxjL589mzIrl344MaxcoF1rtGzECdmTL66rG6GrrvDh9GGsGx58pKXH7Ci2X3lLgiPSbbqZjOfEDcXxZ9sUDQIyatlelw9+2VgVDOhbBCyK7GeNcjRYA0/nWwShCq1ub6WfFvxhQaDLSUjK1zKpjwpC4pmUowcniDq2Ahfaik/Bx8iPJWTpjhSxjeGuZQs30acGejqjcrxf379SL3HaNUqV6t09ttnRIE+RinbCGUYjKNsl3SvRNjyMdZk/E=)
2025-05-19 14:14:51.060284 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIqiWY/zsFlMmPfnUA+Br4vcWj8ZN0A56jmY/L8mNrWfy82yT9VXvlrTokrmSWhTi+VN/mTRzcICG5nQjMJC+6s=)
2025-05-19 14:14:51.060354 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILG5hq15ktaNQmk8ird/m703TLWEos55CgtCYTSEjnEM)
2025-05-19 14:14:51.061067 | orchestrator |
2025-05-19 14:14:51.061299 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:51.062325 | orchestrator | Monday 19 May 2025 14:14:51 +0000 (0:00:01.025) 0:00:21.212 ************
2025-05-19 14:14:52.079572 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBHW3foNlaGNg+ktHJbGgjMgNXN6b7T+Fb7LalFG7FRp)
2025-05-19 14:14:52.081247 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3eV4I3d1IW/mGn3gtI9bKEFwIukFsnKbXgjFtwk3fZBkW58NjKw2CHffXIzNtqhtAVOVvFpke7DJTvcFyT3cORCRJrKUeKpUZa3lptIjoWAEFJL0fOLLcVit0MHJFgeTN1uINxRnv6RzSt5L4tkvDJAUH5mcbyjf0hepCX01R0QNyYOQ5hTks36ojji9OMDBbEBW+iq6DTMvj4LtxdNBBxOE/u3BeTMu/MM3ov5rWJ3CSC5JIOSQMvbKRZpqqXl0dcs2kyYvOIS/jTHFdlP0VmIlOd4CX8MYEPJvOsPip2S1wweGWUktU++FzxNERhGaq4ujwccIiqE4YozRfmSW3TIOwSpYx22NZUdqA9XgSjLk3DjLNnTjM4BT04uXZsVmsenumvv9prw9Ku5XWG+duIhTWZmZN7HGzFhxh7RuRnEmv7z517r7cCLoOmO2+w+rmX16MRJtgCecmSmyVfQwrIi5QXAPonPqFXxF+PNlBlvgW1SxPpzNja1/uY7AV5a8=)
2025-05-19 14:14:52.081300 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFvJGK4dmkYqvR3KIt/OaPLb89YmUOCFC8TfyTtHD+8Nu+YtLFBM+IARJTbAHOvlMJp9sqxyyNloTnAIlG4kpDA=)
2025-05-19 14:14:52.082555 | orchestrator |
2025-05-19 14:14:52.084737 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:52.086676 | orchestrator | Monday 19 May 2025 14:14:52 +0000 (0:00:01.021) 0:00:22.233 ************
2025-05-19 14:14:53.109799 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCZlIZDuO8UGkLWUFqjkOheyg1SvlPPqyWHB65U8T6gV513xiqdTqbxFq1JFVCAgJL5pclvNNW6r6iFMrEIM+0Y=)
2025-05-19 14:14:53.110104 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCf3BDeIXu0urOhgZwGVfOtRB7XjuEpHJpv4uYyBRZliGZhKI9FOuPXzHBPEXS6pLa/PwkGc/Z4rQ4asAc34duh3xEqmjZ1ZfT+lfakII6a+HGS17iMc+pnB7/6jeJnXB37DD4rguNm5bo6JqaBI4xgCRanp7RzwR20UOwtKclsaAMT4J63Yl49BQT84fzracFJqpOPBfENiJm/oiu8bzt1RJpe/Q9C9aG56jcPBDCYoswkibmTjPCFH5BDZ4tPd+x1OXc7T/6y7VMaje38/WfSa2t8b5UoaMnuIwGgQBQ3XEVENltv8nLteYNZF17CQdXgNvk0Agj2PUwgymre9doub3Fp5xh25FlU2nqag7IrQrW8hDmL50hb7kq+QKbJCbKuPNy2R3L5VY1CrdBqFjQmqRqEo2Pp4Rvm5v9jSL80hfgOZjwqSeouagT0AGqYwW3ik5npBSr+h1TMA2EF6EvEZi86bnaC8/U5+nNW7NF6jjRXlItvfQ1DfNBJ0/A/xNE=)
2025-05-19 14:14:53.111623 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHFv82j/+0H3I9yV/Okc7WxQ07OwIKBr6pELhg06YHUJ)
2025-05-19 14:14:53.112096 | orchestrator |
2025-05-19 14:14:53.112815 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:53.113562 | orchestrator | Monday 19 May 2025 14:14:53 +0000 (0:00:01.029) 0:00:23.263 ************
2025-05-19 14:14:54.151412 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICI0YjHfGhAbeOYJfp4uGDIQh0nbw/4okLDd4zHdhwdb)
2025-05-19 14:14:54.152402 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIie5agCHQQ+0HEY2urgQTBOZrADHBZKaVlwURY+6jenG0MPbxIx60A8OSh1iyvWDOX+zfTkcFMShdHecDp4dWOAQuN1L5mioAPYsTGQMjrTViKJ+GBHePftN1R2TGQnwbRLdD4G3TaLVMB17KwBxFFQmuu1mqTZW7NlM48FiXgMdmnEVnKUhnmKUkAUM6nlY5GYg/2ESFtfwytHLGw+jTRhOFxnselgWZbp268QhEoME9ihmvujqxcc2Cq8/3JgDjDvx9rwBDjogs83IrNhbGDsjvO/TF/LV1kqMzyVYB2FFj+T8/cJ+CQMkpfwp3d7Ko+ApxG2DFz1rLsXP6wnOX/xZsgMKO5fxZyfBQW+9zUKHZWvuCwK3Lc9q1v403oH1H7Muey2NVIzssdGuHSa5r20KVEa/B3c/LeLRnj8USqq+PJppIxbxc+mCIi7qLHCYpbqAi1b6q8zRmlNDaGiVFSgmD/P79J0LHm0FkQdAWt1vkLSxwwrCcKqUB2DYiTAs=)
2025-05-19 14:14:54.153551 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLNbPA2bXdlfWZadyWjx0uBjYBAa8JaI/2zmKGBYBNdPH5RLwDCNb7oDWObqeNLj4uLkJz+Sp+pvdxNBXvWBmDI=)
2025-05-19 14:14:54.155199 | orchestrator |
2025-05-19 14:14:54.155631 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:54.156399 | orchestrator | Monday 19 May 2025 14:14:54 +0000 (0:00:01.042) 0:00:24.305 ************
2025-05-19 14:14:55.202796 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICAf/319IFVdwqdxlkOiAj5NRyWHPzu1RW/8/3tjEGMm)
2025-05-19 14:14:55.203812 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcPcGV8YWX9QGzHP5mMN6I6GoBC6IgfZr6Piq4j0t83lBkGm+HFUVgbwbEZ6lEK7uHwONnuokgTztLuBMKBBJ5bjuw+wKpGU25SQ2EFa7dv1Kbq96gwSY9ygkq1+H4+CEgCUnOjUB8VWA9jXKtrjfpQ/QYF9OPJcDBDR7/ry/uFIdT6I3T1KoYkhpfXdUFjrKZeXUelRm6L/MRf3vhXEA8dn3bYelQDdJbiqHXGZGYEErYJCrfrRDcyPa4s057C/Bm9kqODRhRbXTwrk8+zSvvwu/ulnEItEJeFJ0O0WoNt3ye7uQS3jW236qH1h+a4RArjqMIhZD4t9PIUWiZAhjvkDf11TkPe2gWP9Hg5GJ5c/pCoGw2MWrJg73zg8B2NP9BLpiVbN56vdQBJbPkkBOh6AUEyrDRoioC/NFk9M5aIlrW71PggGlp275OQI5Hr18E8aMRZBJsxxLWKKC0a2m/5l9TukWFbHWWpRoKv4jE78Bhrp2Gq886t9Op2rD68VU=)
2025-05-19 14:14:55.204309 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH1GIavbzIeNUo5C/6NqYkzy/ShwIpzucQ3Vg5Sx00FV04W21QiB8MsGM5FZP7TcannmJIG9ZJpMMlCZwjAnISI=)
2025-05-19 14:14:55.205208 | orchestrator |
2025-05-19 14:14:55.206340 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 14:14:55.207028 | orchestrator | Monday 19 May 2025 14:14:55 +0000 (0:00:01.050) 0:00:25.355 ************
2025-05-19 14:14:56.252843 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0XlHYKR2Stuplxw1s81ndniQBwhxytvw1W7jR6NGE0M55zXKvNPdE+tTESkKDTZG4UOTE2LID/aIvQJC6O+N8vzoPA7Hj1pVyLQ7/XNr5Uqte78mQE6AGA6KOHdcP4rjsds0yGNS4SrlKjJQzq3W/lOLw5eo0v68dqzMWTUrt2e2sqhmmNYq9G8OkxQVaTbIRGMMGas4t+UadBQvbvHTU16PQDoBjjxLytlbNV7X0ZPHd5NTTc0BamAWZceX0vFlNiq/W8HEgCk4K0fM1mYXV1iomVEFndi2f/xZQOVtr9XJQ1380MgGK4r1vj16JI5bGcPcRALRDHUnadfIQI7sjzmc6NBFZzxKRUrYyVAEkloOx/w4ormTQOj3PuhrtQq0dLBrwzE7zMa5F+KQNRa3W3jN3mxn2RJ4QyQqdNuFTEItLy3cgw1+s5A7jeWGrZdar3AD/qUTAzpSUQ7OWw0jmvtWqBYc6dRoW1M2Yv5NBMFKXJwWOk8KWMKbUebaJ0XU=)
2025-05-19 14:14:56.253181 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKwLXUcg+5pjRi5I1S8N0kFVCmo3PGIjbqXZ/WiKM8lCeVIrdqnPp4kgkm5fYde5/oPsbvLj7raDEX1hnhx/2bM=)
2025-05-19 14:14:56.253880 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJTMTSh7NcSfiwv52uoOmpN7AlPxly6ASXwIjSLar3q0)
2025-05-19 14:14:56.254533 | orchestrator |
2025-05-19 14:14:56.255105 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-05-19 14:14:56.255752 | orchestrator | Monday 19 May 2025 14:14:56 +0000 (0:00:01.050) 0:00:26.406 ************
2025-05-19 14:14:56.552939 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-19 14:14:56.553130 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-19 14:14:56.553697 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-19 14:14:56.554610 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-19 14:14:56.557149 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-19 14:14:56.557194 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-19 14:14:56.558373 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-19 14:14:56.558584 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:14:56.559601 | orchestrator |
2025-05-19 14:14:56.560425 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-05-19 14:14:56.561200 | orchestrator | Monday 19 May 2025 14:14:56 +0000 (0:00:00.302) 0:00:26.708 ************
2025-05-19 14:14:56.690129 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:14:56.690623 | orchestrator |
2025-05-19 14:14:56.691604 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-05-19 14:14:56.692633 | orchestrator | Monday 19 May 2025 14:14:56 +0000 (0:00:00.136) 0:00:26.844 ************
2025-05-19 14:14:56.748115 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:14:56.748891 | orchestrator |
2025-05-19 14:14:56.750500 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-05-19 14:14:56.751498 | orchestrator | Monday 19 May 2025 14:14:56 +0000 (0:00:00.058) 0:00:26.903 ************
2025-05-19 14:14:57.262632 | orchestrator | changed: [testbed-manager]
2025-05-19 14:14:57.263527 | orchestrator |
2025-05-19 14:14:57.266345 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:14:57.267146 | orchestrator | 2025-05-19 14:14:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 14:14:57.267185 | orchestrator | 2025-05-19 14:14:57 | INFO  | Please wait and do not abort execution.
2025-05-19 14:14:57.267704 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 14:14:57.268290 | orchestrator |
2025-05-19 14:14:57.270731 | orchestrator |
2025-05-19 14:14:57.271207 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:14:57.274192 | orchestrator | Monday 19 May 2025 14:14:57 +0000 (0:00:00.510) 0:00:27.413 ************
2025-05-19 14:14:57.274529 | orchestrator | ===============================================================================
2025-05-19 14:14:57.275523 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.98s
2025-05-19 14:14:57.276697 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.21s
2025-05-19 14:14:57.277538 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2025-05-19 14:14:57.278057 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-05-19 14:14:57.278288 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-05-19 14:14:57.278728 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-05-19 14:14:57.279069 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-19 14:14:57.279331 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-19 14:14:57.279790 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-19 14:14:57.280166 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-19 14:14:57.280565 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-19 14:14:57.282236 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-19 14:14:57.282583 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-19 14:14:57.282964 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-05-19 14:14:57.283233 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-05-19 14:14:57.283556 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-05-19 14:14:57.283871 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.51s
2025-05-19 14:14:57.284167 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.30s
2025-05-19 14:14:57.284483 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s
2025-05-19 14:14:57.284936 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2025-05-19 14:14:57.700398 | orchestrator | + osism apply squid
2025-05-19 14:14:59.357706 | orchestrator | 2025-05-19 14:14:59 | INFO  | Task e1803d0e-3b1e-4e31-9c52-e9d61b0583f3 (squid) was prepared for execution.
2025-05-19 14:14:59.357811 | orchestrator | 2025-05-19 14:14:59 | INFO  | It takes a moment until task e1803d0e-3b1e-4e31-9c52-e9d61b0583f3 (squid) has been started and output is visible here.
2025-05-19 14:15:02.777135 | orchestrator |
2025-05-19 14:15:02.777300 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-05-19 14:15:02.778356 | orchestrator |
2025-05-19 14:15:02.778805 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-05-19 14:15:02.779250 | orchestrator | Monday 19 May 2025 14:15:02 +0000 (0:00:00.123) 0:00:00.123 ************
2025-05-19 14:15:02.845360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-05-19 14:15:02.845502 | orchestrator |
2025-05-19 14:15:02.846058 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-05-19 14:15:02.846587 | orchestrator | Monday 19 May 2025 14:15:02 +0000 (0:00:00.071) 0:00:00.194 ************
2025-05-19 14:15:03.938906 | orchestrator | ok: [testbed-manager]
2025-05-19 14:15:03.939002 | orchestrator |
2025-05-19 14:15:03.939816 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-05-19 14:15:03.940514 | orchestrator | Monday 19 May 2025 14:15:03 +0000 (0:00:01.092) 0:00:01.287 ************
2025-05-19 14:15:04.959443 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-05-19 14:15:04.959712 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-05-19 14:15:04.960068 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-05-19 14:15:04.960136 | orchestrator |
2025-05-19 14:15:04.960477 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-05-19 14:15:04.960860 | orchestrator | Monday 19 May 2025 14:15:04 +0000 (0:00:01.018) 0:00:02.305 ************
2025-05-19 14:15:05.887122 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-05-19 14:15:05.887264 | orchestrator |
2025-05-19 14:15:05.889939 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-05-19 14:15:05.889979 | orchestrator | Monday 19 May 2025 14:15:05 +0000 (0:00:00.930) 0:00:03.235 ************
2025-05-19 14:15:06.185946 | orchestrator | ok: [testbed-manager]
2025-05-19 14:15:06.186151 | orchestrator |
2025-05-19 14:15:06.186798 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-05-19 14:15:06.187032 | orchestrator | Monday 19 May 2025 14:15:06 +0000 (0:00:00.300) 0:00:03.536 ************
2025-05-19 14:15:07.014169 | orchestrator | changed: [testbed-manager]
2025-05-19 14:15:07.014336 | orchestrator |
2025-05-19 14:15:07.015257 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-05-19 14:15:07.016308 | orchestrator | Monday 19 May 2025 14:15:07 +0000 (0:00:00.824) 0:00:04.360 ************
2025-05-19 14:15:38.404839 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-05-19 14:15:38.404970 | orchestrator | ok: [testbed-manager]
2025-05-19 14:15:38.404989 | orchestrator |
2025-05-19 14:15:38.405004 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-05-19 14:15:38.405524 | orchestrator | Monday 19 May 2025 14:15:38 +0000 (0:00:31.389) 0:00:35.749 ************
2025-05-19 14:15:50.193629 | orchestrator | changed: [testbed-manager]
2025-05-19 14:15:50.193881 | orchestrator |
2025-05-19 14:15:50.194133 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-05-19 14:15:50.195433 | orchestrator | Monday 19 May 2025 14:15:50 +0000 (0:00:11.787) 0:00:47.537 ************
2025-05-19 14:16:50.284068 | orchestrator | Pausing for 60 seconds
2025-05-19 14:16:50.284234 | orchestrator | changed: [testbed-manager]
2025-05-19 14:16:50.284323 | orchestrator |
2025-05-19 14:16:50.286718 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-05-19 14:16:50.287303 | orchestrator | Monday 19 May 2025 14:16:50 +0000 (0:01:00.092) 0:01:47.630 ************
2025-05-19 14:16:50.348638 | orchestrator | ok: [testbed-manager]
2025-05-19 14:16:50.348913 | orchestrator |
2025-05-19 14:16:50.349592 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for a healthy squid service] *****
2025-05-19 14:16:50.350353 | orchestrator | Monday 19 May 2025 14:16:50 +0000 (0:00:00.067) 0:01:47.698 ************
2025-05-19 14:16:50.936970 | orchestrator | changed: [testbed-manager]
2025-05-19 14:16:50.937073 | orchestrator |
2025-05-19 14:16:50.937930 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:16:50.938127 | orchestrator | 2025-05-19 14:16:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 14:16:50.938191 | orchestrator | 2025-05-19 14:16:50 | INFO  | Please wait and do not abort execution.
2025-05-19 14:16:50.938612 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:16:50.938896 | orchestrator | 2025-05-19 14:16:50.939102 | orchestrator | 2025-05-19 14:16:50.939337 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:16:50.939625 | orchestrator | Monday 19 May 2025 14:16:50 +0000 (0:00:00.588) 0:01:48.286 ************ 2025-05-19 14:16:50.939962 | orchestrator | =============================================================================== 2025-05-19 14:16:50.940200 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-05-19 14:16:50.940408 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.39s 2025-05-19 14:16:50.941158 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.79s 2025-05-19 14:16:50.941251 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.09s 2025-05-19 14:16:50.942488 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.02s 2025-05-19 14:16:50.942958 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.93s 2025-05-19 14:16:50.943430 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.82s 2025-05-19 14:16:50.943967 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2025-05-19 14:16:50.944454 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.30s 2025-05-19 14:16:50.946292 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2025-05-19 14:16:50.946719 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-05-19 14:16:51.416453 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-19 14:16:51.416816 | orchestrator | ++ semver latest 9.0.0 2025-05-19 14:16:51.460722 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-19 14:16:51.460798 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-19 14:16:51.461282 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-19 14:16:53.163198 | orchestrator | 2025-05-19 14:16:53 | INFO  | Task 53d451fe-7eb5-4fa7-a41d-8027a97c9ee2 (operator) was prepared for execution. 2025-05-19 14:16:53.163302 | orchestrator | 2025-05-19 14:16:53 | INFO  | It takes a moment until task 53d451fe-7eb5-4fa7-a41d-8027a97c9ee2 (operator) has been started and output is visible here. 
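Two details from the squid run above are worth calling out: "FAILED - RETRYING: ... (10 retries left)" is Ansible's retries/until loop counting down, and "Pausing for 60 seconds" is the banner printed by the pause module. The role's actual task definitions are not part of this log, so the sketch below only illustrates the pattern; the module choice and the delay between attempts are assumptions.

# Sketch of the retry pattern behind "Manage squid service"; assumed to
# drive the container stack via docker compose.
- name: Manage squid service
  community.docker.docker_compose_v2:     # assumed module
    project_src: /opt/squid
    state: present
  register: result
  until: result is not failed
  retries: 10                             # first failure printed "(10 retries left)"
  delay: 30                               # assumed wait between attempts

- name: Wait for squid service to start   # handler that printed "Pausing for 60 seconds"
  ansible.builtin.pause:
    seconds: 60

With retries: 10 a persistently failing task would be attempted eleven times in total; here the second attempt succeeded, so the task ended ok, and the 31.39s for "Manage squid service" in the recap is dominated by the wait between the two attempts.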
2025-05-19 14:16:57.070958 | orchestrator | 2025-05-19 14:16:57.071096 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-19 14:16:57.071792 | orchestrator | 2025-05-19 14:16:57.072778 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 14:16:57.074310 | orchestrator | Monday 19 May 2025 14:16:57 +0000 (0:00:00.146) 0:00:00.146 ************ 2025-05-19 14:17:00.214905 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:17:00.215087 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:17:00.215104 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:00.215185 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:00.215956 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:17:00.216145 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:00.216843 | orchestrator | 2025-05-19 14:17:00.218216 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-19 14:17:00.218250 | orchestrator | Monday 19 May 2025 14:17:00 +0000 (0:00:03.144) 0:00:03.291 ************ 2025-05-19 14:17:00.991013 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:00.991528 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:00.991808 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:17:00.994613 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:00.995693 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:17:00.997528 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:17:00.997865 | orchestrator | 2025-05-19 14:17:00.998672 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-19 14:17:00.999533 | orchestrator | 2025-05-19 14:17:01.000950 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-19 14:17:01.001088 | orchestrator | Monday 19 May 2025 14:17:00 +0000 (0:00:00.778) 0:00:04.069 ************ 2025-05-19 14:17:01.060533 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:17:01.086261 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:17:01.106325 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:17:01.154577 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:01.154750 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:01.155359 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:01.156178 | orchestrator | 2025-05-19 14:17:01.157330 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-19 14:17:01.157562 | orchestrator | Monday 19 May 2025 14:17:01 +0000 (0:00:00.162) 0:00:04.232 ************ 2025-05-19 14:17:01.216612 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:17:01.242367 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:17:01.263895 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:17:01.313429 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:01.314206 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:01.316065 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:01.316931 | orchestrator | 2025-05-19 14:17:01.318253 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-19 14:17:01.319136 | orchestrator | Monday 19 May 2025 14:17:01 +0000 (0:00:00.158) 0:00:04.391 ************ 2025-05-19 14:17:01.891784 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:17:01.891891 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:01.891973 | orchestrator | changed: [testbed-node-5] 2025-05-19 
14:17:01.892940 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:17:01.893397 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:01.895259 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:17:01.896139 | orchestrator | 2025-05-19 14:17:01.897026 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-19 14:17:01.898123 | orchestrator | Monday 19 May 2025 14:17:01 +0000 (0:00:00.578) 0:00:04.969 ************ 2025-05-19 14:17:02.693930 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:17:02.694087 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:02.695282 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:02.696384 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:17:02.697411 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:17:02.698548 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:17:02.699337 | orchestrator | 2025-05-19 14:17:02.699839 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-19 14:17:02.700527 | orchestrator | Monday 19 May 2025 14:17:02 +0000 (0:00:00.800) 0:00:05.769 ************ 2025-05-19 14:17:03.841095 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-19 14:17:03.841985 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-19 14:17:03.842946 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-19 14:17:03.843925 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-19 14:17:03.845508 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-19 14:17:03.846669 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-19 14:17:03.847129 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-19 14:17:03.847761 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-19 14:17:03.848413 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-19 14:17:03.848803 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-19 14:17:03.849420 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-19 14:17:03.849878 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-19 14:17:03.850412 | orchestrator | 2025-05-19 14:17:03.852153 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-19 14:17:03.852588 | orchestrator | Monday 19 May 2025 14:17:03 +0000 (0:00:01.146) 0:00:06.915 ************ 2025-05-19 14:17:05.039453 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:05.039535 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:05.039998 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:17:05.042687 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:17:05.042702 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:17:05.042707 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:17:05.042711 | orchestrator | 2025-05-19 14:17:05.043022 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-19 14:17:05.044008 | orchestrator | Monday 19 May 2025 14:17:05 +0000 (0:00:01.199) 0:00:08.115 ************ 2025-05-19 14:17:06.216135 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-19 14:17:06.216889 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-19 14:17:06.217949 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-19 14:17:06.305023 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 14:17:06.305220 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 14:17:06.306283 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 14:17:06.307220 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 14:17:06.308104 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 14:17:06.308915 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 14:17:06.309983 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-19 14:17:06.310932 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-19 14:17:06.311161 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-19 14:17:06.312625 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-19 14:17:06.312646 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-19 14:17:06.313032 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-19 14:17:06.313250 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-19 14:17:06.316710 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-19 14:17:06.316788 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-19 14:17:06.317652 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-19 14:17:06.317689 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-19 14:17:06.317803 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-19 14:17:06.317923 | orchestrator | 2025-05-19 14:17:06.318268 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-19 14:17:06.318665 | orchestrator | Monday 19 May 2025 14:17:06 +0000 (0:00:01.266) 0:00:09.381 ************ 2025-05-19 14:17:06.874293 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:06.874920 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:06.874986 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:17:06.875000 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:17:06.875128 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:17:06.875713 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:17:06.875903 | orchestrator | 2025-05-19 14:17:06.876282 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-19 14:17:06.877706 | orchestrator | Monday 19 May 2025 14:17:06 +0000 (0:00:00.569) 0:00:09.951 ************ 2025-05-19 14:17:06.953349 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:17:06.974899 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:17:07.004542 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:17:07.061125 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:17:07.063029 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:17:07.063307 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:17:07.063697 | orchestrator | 2025-05-19 14:17:07.063963 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-05-19 14:17:07.064270 | orchestrator | Monday 19 May 2025 14:17:07 +0000 (0:00:00.188) 0:00:10.140 ************ 2025-05-19 14:17:07.757389 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 14:17:07.757623 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:17:07.758111 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 14:17:07.758633 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:07.760820 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 14:17:07.761426 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-19 14:17:07.761924 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:17:07.762627 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:17:07.763418 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 14:17:07.763936 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:07.764475 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-19 14:17:07.764992 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:17:07.765492 | orchestrator | 2025-05-19 14:17:07.766143 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-19 14:17:07.766506 | orchestrator | Monday 19 May 2025 14:17:07 +0000 (0:00:00.695) 0:00:10.835 ************ 2025-05-19 14:17:07.807748 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:17:07.854764 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:17:07.884811 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:17:07.915579 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:17:07.915902 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:17:07.917138 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:17:07.918118 | orchestrator | 2025-05-19 14:17:07.923249 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-19 14:17:07.924169 | orchestrator | Monday 19 May 2025 14:17:07 +0000 (0:00:00.159) 0:00:10.994 ************ 2025-05-19 14:17:07.970068 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:17:08.002654 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:17:08.026165 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:17:08.048922 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:17:08.085995 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:17:08.086883 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:17:08.087458 | orchestrator | 2025-05-19 14:17:08.091579 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-19 14:17:08.092049 | orchestrator | Monday 19 May 2025 14:17:08 +0000 (0:00:00.170) 0:00:11.165 ************ 2025-05-19 14:17:08.133732 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:17:08.166309 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:17:08.194944 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:17:08.244761 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:17:08.244937 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:17:08.248216 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:17:08.248319 | orchestrator | 2025-05-19 14:17:08.249090 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-19 14:17:08.249230 | orchestrator | Monday 19 May 2025 14:17:08 +0000 (0:00:00.158) 0:00:11.323 ************ 2025-05-19 14:17:08.872618 | orchestrator | changed: [testbed-node-0] 2025-05-19 
14:17:08.873132 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:17:08.874393 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:08.874800 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:17:08.875570 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:08.876337 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:17:08.877008 | orchestrator | 2025-05-19 14:17:08.877616 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-19 14:17:08.878231 | orchestrator | Monday 19 May 2025 14:17:08 +0000 (0:00:00.625) 0:00:11.949 ************ 2025-05-19 14:17:08.942901 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:17:08.985657 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:17:09.076248 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:17:09.078148 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:17:09.079672 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:17:09.081090 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:17:09.082521 | orchestrator | 2025-05-19 14:17:09.083870 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:17:09.084646 | orchestrator | 2025-05-19 14:17:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:17:09.085310 | orchestrator | 2025-05-19 14:17:09 | INFO  | Please wait and do not abort execution. 2025-05-19 14:17:09.086533 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 14:17:09.088314 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 14:17:09.088656 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 14:17:09.089930 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 14:17:09.091049 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 14:17:09.092141 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 14:17:09.093162 | orchestrator | 2025-05-19 14:17:09.093812 | orchestrator | 2025-05-19 14:17:09.095296 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:17:09.096097 | orchestrator | Monday 19 May 2025 14:17:09 +0000 (0:00:00.205) 0:00:12.155 ************ 2025-05-19 14:17:09.097065 | orchestrator | =============================================================================== 2025-05-19 14:17:09.098079 | orchestrator | Gathering Facts --------------------------------------------------------- 3.14s 2025-05-19 14:17:09.098756 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s 2025-05-19 14:17:09.099644 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s 2025-05-19 14:17:09.100556 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s 2025-05-19 14:17:09.101420 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2025-05-19 14:17:09.102884 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s 2025-05-19 14:17:09.103144 | orchestrator | 
osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s 2025-05-19 14:17:09.104459 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s 2025-05-19 14:17:09.105309 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.58s 2025-05-19 14:17:09.106239 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2025-05-19 14:17:09.106803 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s 2025-05-19 14:17:09.107256 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s 2025-05-19 14:17:09.107955 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s 2025-05-19 14:17:09.108568 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2025-05-19 14:17:09.109102 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2025-05-19 14:17:09.109509 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2025-05-19 14:17:09.110083 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-05-19 14:17:09.519838 | orchestrator | + osism apply --environment custom facts 2025-05-19 14:17:11.154803 | orchestrator | 2025-05-19 14:17:11 | INFO  | Trying to run play facts in environment custom 2025-05-19 14:17:11.212633 | orchestrator | 2025-05-19 14:17:11 | INFO  | Task e500bf59-6046-45c4-8f5a-87b3d8ec6ab0 (facts) was prepared for execution. 2025-05-19 14:17:11.212755 | orchestrator | 2025-05-19 14:17:11 | INFO  | It takes a moment until task e500bf59-6046-45c4-8f5a-87b3d8ec6ab0 (facts) has been started and output is visible here. 
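The facts play launched above drops plain fact files into /etc/ansible/facts.d on each host. Anything there named *.fact, whether static JSON/INI content or an executable that prints JSON, is read by the setup module and exposed under ansible_local. A minimal sketch of the mechanism follows; the payload is invented, since the real contents of the testbed_ceph_devices fact files are not shown in the log.

# Custom local facts: a static .fact file with JSON content is enough.
- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Copy fact file
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/testbed_ceph_devices.fact
    content: '{"devices": ["/dev/sdb", "/dev/sdc"]}'   # invented example payload
    mode: "0644"

After the next fact refresh (the "Gathers facts about hosts" play that closes the facts run below does exactly that), the value is readable as ansible_local.testbed_ceph_devices.devices.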
2025-05-19 14:17:15.082664 | orchestrator | 2025-05-19 14:17:15.082735 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-19 14:17:15.082784 | orchestrator | 2025-05-19 14:17:15.083034 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-19 14:17:15.084190 | orchestrator | Monday 19 May 2025 14:17:15 +0000 (0:00:00.068) 0:00:00.068 ************ 2025-05-19 14:17:16.500492 | orchestrator | ok: [testbed-manager] 2025-05-19 14:17:16.500686 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:16.502292 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:16.503530 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:17:16.504208 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:17:16.505041 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:17:16.505970 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:17:16.506635 | orchestrator | 2025-05-19 14:17:16.507528 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-19 14:17:16.508247 | orchestrator | Monday 19 May 2025 14:17:16 +0000 (0:00:01.419) 0:00:01.487 ************ 2025-05-19 14:17:17.655380 | orchestrator | ok: [testbed-manager] 2025-05-19 14:17:17.655523 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:17:17.655904 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:17.658103 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:17:17.659016 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:17:17.659365 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:17.660407 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:17:17.661117 | orchestrator | 2025-05-19 14:17:17.661868 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-05-19 14:17:17.662186 | orchestrator | 2025-05-19 14:17:17.662783 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-19 14:17:17.663580 | orchestrator | Monday 19 May 2025 14:17:17 +0000 (0:00:01.156) 0:00:02.644 ************ 2025-05-19 14:17:17.768734 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:17.769039 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:17.769330 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:17.770261 | orchestrator | 2025-05-19 14:17:17.771104 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-19 14:17:17.771542 | orchestrator | Monday 19 May 2025 14:17:17 +0000 (0:00:00.115) 0:00:02.759 ************ 2025-05-19 14:17:17.956248 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:17.956956 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:17.958338 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:17.958782 | orchestrator | 2025-05-19 14:17:17.959449 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-19 14:17:17.959757 | orchestrator | Monday 19 May 2025 14:17:17 +0000 (0:00:00.187) 0:00:02.946 ************ 2025-05-19 14:17:18.122060 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:18.123201 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:18.124642 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:18.126133 | orchestrator | 2025-05-19 14:17:18.127365 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-19 14:17:18.128171 | orchestrator | Monday 19 
May 2025 14:17:18 +0000 (0:00:00.165) 0:00:03.112 ************ 2025-05-19 14:17:18.256598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:17:18.257309 | orchestrator | 2025-05-19 14:17:18.258463 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-19 14:17:18.259580 | orchestrator | Monday 19 May 2025 14:17:18 +0000 (0:00:00.133) 0:00:03.246 ************ 2025-05-19 14:17:18.726170 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:18.726260 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:18.726813 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:18.726836 | orchestrator | 2025-05-19 14:17:18.726969 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-19 14:17:18.727351 | orchestrator | Monday 19 May 2025 14:17:18 +0000 (0:00:00.469) 0:00:03.716 ************ 2025-05-19 14:17:18.832465 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:17:18.832691 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:17:18.832713 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:17:18.833618 | orchestrator | 2025-05-19 14:17:18.834452 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-19 14:17:18.834967 | orchestrator | Monday 19 May 2025 14:17:18 +0000 (0:00:00.105) 0:00:03.822 ************ 2025-05-19 14:17:19.871510 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:19.871657 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:19.871747 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:17:19.872116 | orchestrator | 2025-05-19 14:17:19.872352 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-19 14:17:19.872844 | orchestrator | Monday 19 May 2025 14:17:19 +0000 (0:00:01.036) 0:00:04.859 ************ 2025-05-19 14:17:20.325306 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:20.325454 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:20.326528 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:20.327676 | orchestrator | 2025-05-19 14:17:20.328749 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-19 14:17:20.330783 | orchestrator | Monday 19 May 2025 14:17:20 +0000 (0:00:00.453) 0:00:05.313 ************ 2025-05-19 14:17:21.353187 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:21.353356 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:21.353457 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:17:21.354098 | orchestrator | 2025-05-19 14:17:21.355341 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-19 14:17:21.355851 | orchestrator | Monday 19 May 2025 14:17:21 +0000 (0:00:01.027) 0:00:06.340 ************ 2025-05-19 14:17:34.455118 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:17:34.455245 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:34.455260 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:34.455272 | orchestrator | 2025-05-19 14:17:34.455284 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-19 14:17:34.455297 | orchestrator | Monday 19 May 2025 14:17:34 +0000 (0:00:13.097) 0:00:19.437 ************ 2025-05-19 14:17:34.513052 | orchestrator | 
skipping: [testbed-node-3] 2025-05-19 14:17:34.556478 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:17:34.556584 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:17:34.557784 | orchestrator | 2025-05-19 14:17:34.558513 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-19 14:17:34.559581 | orchestrator | Monday 19 May 2025 14:17:34 +0000 (0:00:00.108) 0:00:19.546 ************ 2025-05-19 14:17:41.603244 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:17:41.605774 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:17:41.605818 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:17:41.605825 | orchestrator | 2025-05-19 14:17:41.605833 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-19 14:17:41.606847 | orchestrator | Monday 19 May 2025 14:17:41 +0000 (0:00:07.045) 0:00:26.591 ************ 2025-05-19 14:17:42.058786 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:42.059890 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:42.060441 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:42.061155 | orchestrator | 2025-05-19 14:17:42.062463 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-19 14:17:42.062613 | orchestrator | Monday 19 May 2025 14:17:42 +0000 (0:00:00.456) 0:00:27.048 ************ 2025-05-19 14:17:45.457998 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-19 14:17:45.458111 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-19 14:17:45.459191 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-19 14:17:45.460554 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-19 14:17:45.462622 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-19 14:17:45.462658 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-19 14:17:45.463369 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-19 14:17:45.464909 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-19 14:17:45.464924 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-19 14:17:45.468038 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-19 14:17:45.468088 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-19 14:17:45.468098 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-19 14:17:45.468122 | orchestrator | 2025-05-19 14:17:45.468132 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-19 14:17:45.468141 | orchestrator | Monday 19 May 2025 14:17:45 +0000 (0:00:03.397) 0:00:30.445 ************ 2025-05-19 14:17:46.622709 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:46.624211 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:46.624998 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:46.627137 | orchestrator | 2025-05-19 14:17:46.627220 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 14:17:46.628530 | orchestrator | 2025-05-19 14:17:46.629866 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 14:17:46.630895 | orchestrator | 
Monday 19 May 2025 14:17:46 +0000 (0:00:01.162) 0:00:31.608 ************ 2025-05-19 14:17:50.394263 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:17:50.396000 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:17:50.396490 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:17:50.398811 | orchestrator | ok: [testbed-manager] 2025-05-19 14:17:50.402678 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:50.403237 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:50.403346 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:50.403826 | orchestrator | 2025-05-19 14:17:50.404150 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:17:50.404888 | orchestrator | 2025-05-19 14:17:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:17:50.404908 | orchestrator | 2025-05-19 14:17:50 | INFO  | Please wait and do not abort execution. 2025-05-19 14:17:50.405782 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:17:50.405884 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:17:50.406182 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:17:50.406598 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:17:50.406895 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:17:50.407904 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:17:50.407937 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:17:50.408149 | orchestrator | 2025-05-19 14:17:50.408474 | orchestrator | 2025-05-19 14:17:50.408847 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:17:50.409329 | orchestrator | Monday 19 May 2025 14:17:50 +0000 (0:00:03.774) 0:00:35.382 ************ 2025-05-19 14:17:50.409630 | orchestrator | =============================================================================== 2025-05-19 14:17:50.411177 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.10s 2025-05-19 14:17:50.412454 | orchestrator | Install required packages (Debian) -------------------------------------- 7.05s 2025-05-19 14:17:50.413007 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.77s 2025-05-19 14:17:50.413548 | orchestrator | Copy fact files --------------------------------------------------------- 3.40s 2025-05-19 14:17:50.414094 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s 2025-05-19 14:17:50.414556 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.16s 2025-05-19 14:17:50.415039 | orchestrator | Copy fact file ---------------------------------------------------------- 1.16s 2025-05-19 14:17:50.415453 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s 2025-05-19 14:17:50.416644 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s 2025-05-19 14:17:50.417211 | orchestrator | osism.commons.repository : Create 
/etc/apt/sources.list.d directory ----- 0.47s 2025-05-19 14:17:50.417478 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s 2025-05-19 14:17:50.417572 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s 2025-05-19 14:17:50.418108 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s 2025-05-19 14:17:50.418445 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s 2025-05-19 14:17:50.418898 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2025-05-19 14:17:50.419921 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-05-19 14:17:50.420609 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-05-19 14:17:50.420963 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-05-19 14:17:50.858848 | orchestrator | + osism apply bootstrap 2025-05-19 14:17:52.586139 | orchestrator | 2025-05-19 14:17:52 | INFO  | Task f2ec9dfb-bc63-4a13-935c-d6682c340c68 (bootstrap) was prepared for execution. 2025-05-19 14:17:52.586249 | orchestrator | 2025-05-19 14:17:52 | INFO  | It takes a moment until task f2ec9dfb-bc63-4a13-935c-d6682c340c68 (bootstrap) has been started and output is visible here. 2025-05-19 14:17:56.622861 | orchestrator | 2025-05-19 14:17:56.623834 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-19 14:17:56.625017 | orchestrator | 2025-05-19 14:17:56.625921 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-19 14:17:56.627706 | orchestrator | Monday 19 May 2025 14:17:56 +0000 (0:00:00.161) 0:00:00.161 ************ 2025-05-19 14:17:56.706647 | orchestrator | ok: [testbed-manager] 2025-05-19 14:17:56.730612 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:17:56.761807 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:17:56.786777 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:17:56.876729 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:17:56.877642 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:17:56.878729 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:17:56.880129 | orchestrator | 2025-05-19 14:17:56.881018 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 14:17:56.882102 | orchestrator | 2025-05-19 14:17:56.883431 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 14:17:56.884324 | orchestrator | Monday 19 May 2025 14:17:56 +0000 (0:00:00.257) 0:00:00.418 ************ 2025-05-19 14:18:00.451543 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:00.451864 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:00.452527 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:00.453126 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:00.455733 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:00.456442 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:00.457391 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:00.457963 | orchestrator | 2025-05-19 14:18:00.459180 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-19 14:18:00.460055 | orchestrator | 2025-05-19 14:18:00.460800 | orchestrator | TASK 
[Gathers facts about hosts] *********************************************** 2025-05-19 14:18:00.461200 | orchestrator | Monday 19 May 2025 14:18:00 +0000 (0:00:03.575) 0:00:03.994 ************ 2025-05-19 14:18:00.528534 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-19 14:18:00.528691 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-19 14:18:00.547574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-19 14:18:00.549577 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-19 14:18:00.572161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 14:18:00.572557 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-19 14:18:00.572987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 14:18:00.614258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 14:18:00.614708 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-19 14:18:00.615465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 14:18:00.615794 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-19 14:18:00.876333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 14:18:00.876545 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-19 14:18:00.880755 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-19 14:18:00.880819 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-19 14:18:00.880879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 14:18:00.881584 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:18:00.882244 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-19 14:18:00.882871 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-19 14:18:00.883458 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-19 14:18:00.883969 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:18:00.884547 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-19 14:18:00.885226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 14:18:00.885973 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-19 14:18:00.886648 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-19 14:18:00.887162 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 14:18:00.889575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 14:18:00.889596 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-19 14:18:00.889644 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 14:18:00.889665 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 14:18:00.889683 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-19 14:18:00.889762 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 14:18:00.891750 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-19 14:18:00.892205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 14:18:00.892796 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:18:00.893261 | 
orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-19 14:18:00.893821 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-19 14:18:00.894416 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-19 14:18:00.894940 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 14:18:00.895348 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-19 14:18:00.895870 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 14:18:00.896397 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 14:18:00.896873 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-19 14:18:00.897458 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:18:00.897804 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 14:18:00.898276 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-19 14:18:00.898777 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-19 14:18:00.899242 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-19 14:18:00.899773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 14:18:00.900262 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:18:00.900768 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-19 14:18:00.901225 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-19 14:18:00.901824 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-19 14:18:00.902177 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:18:00.902702 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-19 14:18:00.903205 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:18:00.903631 | orchestrator | 2025-05-19 14:18:00.904149 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-19 14:18:00.904498 | orchestrator | 2025-05-19 14:18:00.905102 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-19 14:18:00.905456 | orchestrator | Monday 19 May 2025 14:18:00 +0000 (0:00:00.423) 0:00:04.417 ************ 2025-05-19 14:18:02.109615 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:02.110952 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:02.111461 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:02.112242 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:02.113180 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:02.113937 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:02.114680 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:02.115245 | orchestrator | 2025-05-19 14:18:02.116467 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-19 14:18:02.117167 | orchestrator | Monday 19 May 2025 14:18:02 +0000 (0:00:01.232) 0:00:05.650 ************ 2025-05-19 14:18:03.231072 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:03.235096 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:03.235193 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:03.235208 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:03.235243 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:03.235327 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:03.235880 | orchestrator | ok: 
[testbed-node-2] 2025-05-19 14:18:03.236509 | orchestrator | 2025-05-19 14:18:03.237385 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-19 14:18:03.238074 | orchestrator | Monday 19 May 2025 14:18:03 +0000 (0:00:01.120) 0:00:06.771 ************ 2025-05-19 14:18:03.499735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:18:03.500493 | orchestrator | 2025-05-19 14:18:03.500958 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-19 14:18:03.501456 | orchestrator | Monday 19 May 2025 14:18:03 +0000 (0:00:00.270) 0:00:07.041 ************ 2025-05-19 14:18:05.537725 | orchestrator | changed: [testbed-manager] 2025-05-19 14:18:05.537979 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:18:05.538873 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:18:05.540447 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:05.541080 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:18:05.541654 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:05.542821 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:05.543074 | orchestrator | 2025-05-19 14:18:05.543819 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-19 14:18:05.545378 | orchestrator | Monday 19 May 2025 14:18:05 +0000 (0:00:02.035) 0:00:09.077 ************ 2025-05-19 14:18:05.617820 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:18:05.818234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:18:05.819187 | orchestrator | 2025-05-19 14:18:05.820551 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-19 14:18:05.821852 | orchestrator | Monday 19 May 2025 14:18:05 +0000 (0:00:00.281) 0:00:09.358 ************ 2025-05-19 14:18:06.826505 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:18:06.826608 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:18:06.827421 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:18:06.828397 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:06.828811 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:06.829730 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:06.830555 | orchestrator | 2025-05-19 14:18:06.831227 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-19 14:18:06.831671 | orchestrator | Monday 19 May 2025 14:18:06 +0000 (0:00:01.005) 0:00:10.363 ************ 2025-05-19 14:18:06.897526 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:18:07.404533 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:18:07.405258 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:18:07.406556 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:07.409948 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:18:07.410970 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:07.411439 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:07.412249 | orchestrator | 2025-05-19 14:18:07.412941 | orchestrator | TASK 
[osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-19 14:18:07.413414 | orchestrator | Monday 19 May 2025 14:18:07 +0000 (0:00:00.582) 0:00:10.946 ************ 2025-05-19 14:18:07.512163 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:18:07.537071 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:18:07.567218 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:18:07.833873 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:18:07.834568 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:18:07.835026 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:18:07.835753 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:07.836768 | orchestrator | 2025-05-19 14:18:07.840691 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-19 14:18:07.842124 | orchestrator | Monday 19 May 2025 14:18:07 +0000 (0:00:00.428) 0:00:11.374 ************ 2025-05-19 14:18:07.918629 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:18:07.944645 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:18:07.971089 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:18:07.997298 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:18:08.051155 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:18:08.051539 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:18:08.051948 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:18:08.052485 | orchestrator | 2025-05-19 14:18:08.052943 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-19 14:18:08.053440 | orchestrator | Monday 19 May 2025 14:18:08 +0000 (0:00:00.219) 0:00:11.594 ************ 2025-05-19 14:18:08.364071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:18:08.365080 | orchestrator | 2025-05-19 14:18:08.366147 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-19 14:18:08.370193 | orchestrator | Monday 19 May 2025 14:18:08 +0000 (0:00:00.312) 0:00:11.906 ************ 2025-05-19 14:18:08.685859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:18:08.686490 | orchestrator | 2025-05-19 14:18:08.690866 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-19 14:18:08.691566 | orchestrator | Monday 19 May 2025 14:18:08 +0000 (0:00:00.320) 0:00:12.226 ************ 2025-05-19 14:18:10.016630 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:10.016712 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:10.017300 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:10.017876 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:10.018550 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:10.020976 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:10.021292 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:10.021932 | orchestrator | 2025-05-19 14:18:10.022479 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-19 
14:18:10.023114 | orchestrator | Monday 19 May 2025 14:18:10 +0000 (0:00:01.329) 0:00:13.555 ************ 2025-05-19 14:18:10.090466 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:18:10.115276 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:18:10.139073 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:18:10.169717 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:18:10.221591 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:18:10.222088 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:18:10.224918 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:18:10.225276 | orchestrator | 2025-05-19 14:18:10.225578 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-19 14:18:10.225910 | orchestrator | Monday 19 May 2025 14:18:10 +0000 (0:00:00.208) 0:00:13.764 ************ 2025-05-19 14:18:10.843947 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:10.844669 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:10.845245 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:10.846502 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:10.847095 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:10.847941 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:10.848749 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:10.849665 | orchestrator | 2025-05-19 14:18:10.850209 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-19 14:18:10.851054 | orchestrator | Monday 19 May 2025 14:18:10 +0000 (0:00:00.620) 0:00:14.384 ************ 2025-05-19 14:18:10.938571 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:18:10.961198 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:18:10.986893 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:18:11.012465 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:18:11.078089 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:18:11.078295 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:18:11.079590 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:18:11.079874 | orchestrator | 2025-05-19 14:18:11.080603 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-19 14:18:11.081090 | orchestrator | Monday 19 May 2025 14:18:11 +0000 (0:00:00.235) 0:00:14.620 ************ 2025-05-19 14:18:11.643405 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:11.643573 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:18:11.644369 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:18:11.645219 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:18:11.645477 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:11.648447 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:11.649240 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:11.650218 | orchestrator | 2025-05-19 14:18:11.650895 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-19 14:18:11.651398 | orchestrator | Monday 19 May 2025 14:18:11 +0000 (0:00:00.565) 0:00:15.185 ************ 2025-05-19 14:18:12.950619 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:12.951930 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:18:12.953274 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:18:12.954269 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:18:12.955444 | orchestrator | changed: [testbed-node-0] 
2025-05-19 14:18:12.956516 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:12.957413 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:12.958423 | orchestrator | 2025-05-19 14:18:12.959267 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-19 14:18:12.960003 | orchestrator | Monday 19 May 2025 14:18:12 +0000 (0:00:01.305) 0:00:16.491 ************ 2025-05-19 14:18:13.932916 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:13.933247 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:13.935016 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:13.935888 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:13.939171 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:13.939201 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:13.942201 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:13.942230 | orchestrator | 2025-05-19 14:18:13.942889 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-19 14:18:13.943147 | orchestrator | Monday 19 May 2025 14:18:13 +0000 (0:00:00.982) 0:00:17.474 ************ 2025-05-19 14:18:14.200013 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:18:14.200681 | orchestrator | 2025-05-19 14:18:14.201371 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-19 14:18:14.202198 | orchestrator | Monday 19 May 2025 14:18:14 +0000 (0:00:00.264) 0:00:17.739 ************ 2025-05-19 14:18:14.261746 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:18:15.419460 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:18:15.419552 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:15.419566 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:18:15.419578 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:15.419589 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:15.419600 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:18:15.419612 | orchestrator | 2025-05-19 14:18:15.419624 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-19 14:18:15.419636 | orchestrator | Monday 19 May 2025 14:18:15 +0000 (0:00:01.219) 0:00:18.959 ************ 2025-05-19 14:18:15.493548 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:15.520937 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:15.541395 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:15.561298 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:15.607767 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:15.608393 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:15.614520 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:15.614557 | orchestrator | 2025-05-19 14:18:15.614570 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-19 14:18:15.614582 | orchestrator | Monday 19 May 2025 14:18:15 +0000 (0:00:00.191) 0:00:19.151 ************ 2025-05-19 14:18:15.665883 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:15.709016 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:15.730471 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:15.787244 | orchestrator | ok: [testbed-node-5] 2025-05-19 
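
Several roles in this run (resolvconf, repository, rsyslog, packages, motd) branch per distribution by including a family-specific task file, which is what the "included: .../configure-Debian-family.yml for testbed-manager, testbed-node-3, ..." lines record. A sketch of that dispatch pattern, assuming the roles key on ansible_os_family (the exact expression the collection uses is not visible here):

    - name: Include distribution specific configuration tasks (sketch)
      ansible.builtin.include_tasks: "configure-{{ ansible_os_family }}-family.yml"

On Ubuntu hosts ansible_os_family resolves to "Debian", matching the file names in the log.
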
14:18:15.787498 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:15.787930 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:15.788294 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:15.789925 | orchestrator | 2025-05-19 14:18:15.789949 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-19 14:18:15.789962 | orchestrator | Monday 19 May 2025 14:18:15 +0000 (0:00:00.179) 0:00:19.330 ************ 2025-05-19 14:18:15.873408 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:15.892390 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:15.921875 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:15.975034 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:15.975183 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:15.975983 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:15.980946 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:15.981259 | orchestrator | 2025-05-19 14:18:15.982159 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-19 14:18:15.982491 | orchestrator | Monday 19 May 2025 14:18:15 +0000 (0:00:00.187) 0:00:19.517 ************ 2025-05-19 14:18:16.201104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:18:16.201513 | orchestrator | 2025-05-19 14:18:16.202265 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-19 14:18:16.202471 | orchestrator | Monday 19 May 2025 14:18:16 +0000 (0:00:00.219) 0:00:19.736 ************ 2025-05-19 14:18:16.822084 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:16.822206 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:16.822224 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:16.822236 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:16.822247 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:16.822692 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:16.823535 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:16.824477 | orchestrator | 2025-05-19 14:18:16.825085 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-19 14:18:16.825662 | orchestrator | Monday 19 May 2025 14:18:16 +0000 (0:00:00.622) 0:00:20.358 ************ 2025-05-19 14:18:16.899583 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:18:16.931008 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:18:16.951574 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:18:17.041225 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:18:17.042117 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:18:17.042954 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:18:17.043755 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:18:17.046999 | orchestrator | 2025-05-19 14:18:17.047073 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-19 14:18:17.047134 | orchestrator | Monday 19 May 2025 14:18:17 +0000 (0:00:00.224) 0:00:20.583 ************ 2025-05-19 14:18:18.180045 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:18.180527 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:18.181068 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:18.181571 | orchestrator | ok: 
[testbed-node-5] 2025-05-19 14:18:18.182754 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:18.185935 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:18.185965 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:18.185976 | orchestrator | 2025-05-19 14:18:18.187160 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-19 14:18:18.187452 | orchestrator | Monday 19 May 2025 14:18:18 +0000 (0:00:01.132) 0:00:21.716 ************ 2025-05-19 14:18:18.757628 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:18.757736 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:18.757751 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:18.757843 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:18.758456 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:18.759156 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:18.760090 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:18.760116 | orchestrator | 2025-05-19 14:18:18.767659 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-19 14:18:18.767706 | orchestrator | Monday 19 May 2025 14:18:18 +0000 (0:00:00.583) 0:00:22.299 ************ 2025-05-19 14:18:20.139862 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:20.142961 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:20.143026 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:20.143708 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:20.144782 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:20.145222 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:20.146244 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:20.147044 | orchestrator | 2025-05-19 14:18:20.147461 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-19 14:18:20.147870 | orchestrator | Monday 19 May 2025 14:18:20 +0000 (0:00:01.378) 0:00:23.678 ************ 2025-05-19 14:18:34.201364 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:34.201523 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:34.201560 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:34.201580 | orchestrator | changed: [testbed-manager] 2025-05-19 14:18:34.202194 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:34.202610 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:34.203593 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:34.204854 | orchestrator | 2025-05-19 14:18:34.205770 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-19 14:18:34.206501 | orchestrator | Monday 19 May 2025 14:18:34 +0000 (0:00:14.056) 0:00:37.735 ************ 2025-05-19 14:18:34.278721 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:34.305045 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:34.331683 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:34.358989 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:34.437572 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:34.441569 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:34.441600 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:34.441613 | orchestrator | 2025-05-19 14:18:34.441967 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-19 14:18:34.442676 | orchestrator | Monday 19 May 2025 14:18:34 +0000 (0:00:00.242) 0:00:37.977 ************ 2025-05-19 14:18:34.520953 | 
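
On Ubuntu 24.04 the repository role skips its "< 24.04" branch, ensures the legacy /etc/apt/sources.list is gone, and ships a deb822-style ubuntu.sources instead; the "Update package cache" task above then takes about 14 s across all seven hosts. A sketch of the sources deployment, assuming stock Ubuntu mirrors; the testbed may well point URIs and Suites at a local mirror, so treat the file content as illustrative only:

    - name: Copy ubuntu.sources file (sketch)
      ansible.builtin.copy:
        dest: /etc/apt/sources.list.d/ubuntu.sources
        mode: "0644"
        content: |
          Types: deb
          URIs: http://archive.ubuntu.com/ubuntu/
          Suites: noble noble-updates noble-backports
          Components: main restricted universe multiverse
          Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

    - name: Update package cache (sketch)
      ansible.builtin.apt:
        update_cache: true
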
orchestrator | ok: [testbed-manager] 2025-05-19 14:18:34.548063 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:34.575721 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:34.602870 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:34.660840 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:34.662474 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:34.664160 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:34.666203 | orchestrator | 2025-05-19 14:18:34.667427 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-05-19 14:18:34.668910 | orchestrator | Monday 19 May 2025 14:18:34 +0000 (0:00:00.223) 0:00:38.201 ************ 2025-05-19 14:18:34.744021 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:34.770792 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:34.795374 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:34.820754 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:34.886262 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:34.886593 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:34.887856 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:34.889291 | orchestrator | 2025-05-19 14:18:34.890484 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-19 14:18:34.891615 | orchestrator | Monday 19 May 2025 14:18:34 +0000 (0:00:00.225) 0:00:38.427 ************ 2025-05-19 14:18:35.156795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:18:35.156985 | orchestrator | 2025-05-19 14:18:35.158866 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-19 14:18:35.159343 | orchestrator | Monday 19 May 2025 14:18:35 +0000 (0:00:00.270) 0:00:38.697 ************ 2025-05-19 14:18:36.875462 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:36.876063 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:36.876439 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:36.877997 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:36.878275 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:36.879922 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:36.879949 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:36.880197 | orchestrator | 2025-05-19 14:18:36.880748 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-19 14:18:36.881483 | orchestrator | Monday 19 May 2025 14:18:36 +0000 (0:00:01.717) 0:00:40.415 ************ 2025-05-19 14:18:38.070687 | orchestrator | changed: [testbed-manager] 2025-05-19 14:18:38.072913 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:18:38.073486 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:38.074785 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:18:38.075026 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:18:38.075776 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:38.076773 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:38.077558 | orchestrator | 2025-05-19 14:18:38.078268 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-19 14:18:38.078976 | orchestrator | Monday 19 May 2025 14:18:38 +0000 (0:00:01.194) 0:00:41.609 ************ 2025-05-19 
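
The rsyslog role above resolves its user and workdir defaults, installs the package, and overwrites /etc/rsyslog.conf on every host (all seven report "changed"). A sketch of the configuration step, assuming a Jinja2 template named rsyslog.conf.j2 and a handler called "Restart rsyslog service"; both names are assumptions, not taken from this log:

    - name: Copy rsyslog.conf configuration file (sketch)
      ansible.builtin.template:
        src: rsyslog.conf.j2  # assumed template name
        dest: /etc/rsyslog.conf
        owner: root
        group: root
        mode: "0644"
      notify: Restart rsyslog service
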
14:18:38.874255 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:38.874813 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:38.875155 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:38.878761 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:38.879433 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:38.879991 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:38.880633 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:38.881211 | orchestrator | 2025-05-19 14:18:38.881935 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-19 14:18:38.882709 | orchestrator | Monday 19 May 2025 14:18:38 +0000 (0:00:00.806) 0:00:42.416 ************ 2025-05-19 14:18:39.151982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:18:39.152086 | orchestrator | 2025-05-19 14:18:39.152484 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-19 14:18:39.153180 | orchestrator | Monday 19 May 2025 14:18:39 +0000 (0:00:00.274) 0:00:42.691 ************ 2025-05-19 14:18:40.198148 | orchestrator | changed: [testbed-manager] 2025-05-19 14:18:40.198767 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:18:40.200028 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:18:40.201264 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:18:40.202242 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:40.203148 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:40.205704 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:40.206096 | orchestrator | 2025-05-19 14:18:40.208014 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-19 14:18:40.209380 | orchestrator | Monday 19 May 2025 14:18:40 +0000 (0:00:01.045) 0:00:43.736 ************ 2025-05-19 14:18:40.296939 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:18:40.321472 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:18:40.342762 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:18:40.494868 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:18:40.496219 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:18:40.499068 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:18:40.500963 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:18:40.501775 | orchestrator | 2025-05-19 14:18:40.503074 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-19 14:18:40.504108 | orchestrator | Monday 19 May 2025 14:18:40 +0000 (0:00:00.299) 0:00:44.035 ************ 2025-05-19 14:18:52.198735 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:18:52.198867 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:18:52.198890 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:52.198910 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:18:52.198929 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:52.201723 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:52.203112 | orchestrator | changed: [testbed-manager] 2025-05-19 14:18:52.203390 | orchestrator | 2025-05-19 14:18:52.203665 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-19 14:18:52.204479 | orchestrator | 
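
The "Forward syslog message to local fluentd daemon" task above drops a forwarding rule on every host. A sketch of what such a rule can look like, assuming an rsyslog omfwd action pointed at a fluentd syslog input on localhost port 5140; the drop-in path, port, and protocol are all assumptions, since the role's actual values are not shown in the log:

    - name: Forward syslog messages to the local fluentd daemon (sketch)
      ansible.builtin.copy:
        dest: /etc/rsyslog.d/60-fluentd.conf  # assumed drop-in path
        mode: "0644"
        content: |
          *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
      notify: Restart rsyslog service
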
Monday 19 May 2025 14:18:52 +0000 (0:00:11.696) 0:00:55.732 ************ 2025-05-19 14:18:53.665460 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:53.665680 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:53.666721 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:53.670448 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:53.671189 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:53.671759 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:53.672524 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:53.677316 | orchestrator | 2025-05-19 14:18:53.677352 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-19 14:18:53.677972 | orchestrator | Monday 19 May 2025 14:18:53 +0000 (0:00:01.473) 0:00:57.206 ************ 2025-05-19 14:18:54.575914 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:54.576134 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:54.577613 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:54.578910 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:54.579887 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:54.580868 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:54.581840 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:54.583170 | orchestrator | 2025-05-19 14:18:54.584150 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-05-19 14:18:54.585025 | orchestrator | Monday 19 May 2025 14:18:54 +0000 (0:00:00.909) 0:00:58.115 ************ 2025-05-19 14:18:54.653984 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:54.678252 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:54.705215 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:54.733440 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:54.789828 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:54.796436 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:54.797739 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:54.798930 | orchestrator | 2025-05-19 14:18:54.800117 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-19 14:18:54.801013 | orchestrator | Monday 19 May 2025 14:18:54 +0000 (0:00:00.215) 0:00:58.330 ************ 2025-05-19 14:18:54.865551 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:54.895811 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:54.915181 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:54.945206 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:54.998088 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:54.998943 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:54.999921 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:55.000935 | orchestrator | 2025-05-19 14:18:55.002235 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-19 14:18:55.002839 | orchestrator | Monday 19 May 2025 14:18:54 +0000 (0:00:00.208) 0:00:58.539 ************ 2025-05-19 14:18:55.281595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:18:55.282979 | orchestrator | 2025-05-19 14:18:55.284086 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-05-19 14:18:55.285582 | orchestrator | 
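
The systohc role first installs util-linux-extra, the package that carries hwclock(8) since Ubuntu 24.04 split it out of the base util-linux package, and then writes the system time to the hardware clock. A minimal sketch of the sync step, assuming a plain command invocation:

    - name: Sync hardware clock (sketch)
      ansible.builtin.command: hwclock --systohc
      changed_when: false  # hwclock emits no usable changed signal
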
Monday 19 May 2025 14:18:55 +0000 (0:00:00.281) 0:00:58.821 ************ 2025-05-19 14:18:56.923260 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:56.924338 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:56.927156 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:56.927404 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:56.928329 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:56.929085 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:56.929986 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:56.930758 | orchestrator | 2025-05-19 14:18:56.931453 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-19 14:18:56.933438 | orchestrator | Monday 19 May 2025 14:18:56 +0000 (0:00:01.642) 0:01:00.463 ************ 2025-05-19 14:18:57.557064 | orchestrator | changed: [testbed-manager] 2025-05-19 14:18:57.557982 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:18:57.560518 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:18:57.560556 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:18:57.561571 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:18:57.562368 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:18:57.563464 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:18:57.564383 | orchestrator | 2025-05-19 14:18:57.565507 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-19 14:18:57.565958 | orchestrator | Monday 19 May 2025 14:18:57 +0000 (0:00:00.634) 0:01:01.097 ************ 2025-05-19 14:18:57.658515 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:57.691845 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:57.726318 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:57.757888 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:57.817517 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:57.817839 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:57.818528 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:57.819544 | orchestrator | 2025-05-19 14:18:57.820634 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-19 14:18:57.821031 | orchestrator | Monday 19 May 2025 14:18:57 +0000 (0:00:00.262) 0:01:01.359 ************ 2025-05-19 14:18:59.134690 | orchestrator | ok: [testbed-manager] 2025-05-19 14:18:59.136032 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:18:59.136456 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:18:59.137737 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:18:59.138880 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:18:59.139487 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:18:59.141155 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:18:59.142921 | orchestrator | 2025-05-19 14:18:59.144771 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-19 14:18:59.145525 | orchestrator | Monday 19 May 2025 14:18:59 +0000 (0:00:01.314) 0:01:02.674 ************ 2025-05-19 14:19:01.037019 | orchestrator | changed: [testbed-manager] 2025-05-19 14:19:01.037262 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:19:01.038110 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:19:01.039186 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:19:01.039214 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:19:01.039660 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:19:01.040451 | orchestrator | 
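
"Set needrestart mode" reports "changed" on all hosts, consistent with switching needrestart from its interactive default to automatic mode so that the unattended package upgrades in this job cannot hang on a service-restart prompt. A sketch, assuming the conventional conf.d drop-in; the file name and the chosen mode 'a' (restart automatically) are assumptions:

    - name: Set needrestart mode (sketch)
      ansible.builtin.copy:
        dest: /etc/needrestart/conf.d/osism.conf  # assumed path
        mode: "0644"
        content: |
          $nrconf{restart} = 'a';
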
changed: [testbed-node-2] 2025-05-19 14:19:01.041818 | orchestrator | 2025-05-19 14:19:01.042378 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-19 14:19:01.043559 | orchestrator | Monday 19 May 2025 14:19:01 +0000 (0:00:01.903) 0:01:04.577 ************ 2025-05-19 14:19:03.282406 | orchestrator | ok: [testbed-manager] 2025-05-19 14:19:03.284199 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:19:03.285907 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:19:03.286998 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:19:03.287521 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:19:03.288473 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:19:03.289541 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:19:03.290117 | orchestrator | 2025-05-19 14:19:03.290774 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-19 14:19:03.291356 | orchestrator | Monday 19 May 2025 14:19:03 +0000 (0:00:02.243) 0:01:06.821 ************ 2025-05-19 14:19:37.982695 | orchestrator | ok: [testbed-manager] 2025-05-19 14:19:37.982802 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:19:37.982814 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:19:37.983578 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:19:37.984529 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:19:37.985427 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:19:37.985902 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:19:37.986878 | orchestrator | 2025-05-19 14:19:37.988699 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-19 14:19:37.989320 | orchestrator | Monday 19 May 2025 14:19:37 +0000 (0:00:34.698) 0:01:41.520 ************ 2025-05-19 14:20:55.291816 | orchestrator | changed: [testbed-manager] 2025-05-19 14:20:55.291944 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:20:55.291961 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:20:55.293619 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:20:55.294794 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:20:55.294823 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:20:55.295024 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:20:55.295245 | orchestrator | 2025-05-19 14:20:55.295853 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-19 14:20:55.296039 | orchestrator | Monday 19 May 2025 14:20:55 +0000 (0:01:17.308) 0:02:58.828 ************ 2025-05-19 14:20:56.997382 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:20:56.997548 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:20:56.999087 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:20:56.999970 | orchestrator | ok: [testbed-manager] 2025-05-19 14:20:57.001424 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:20:57.002385 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:20:57.002726 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:20:57.003457 | orchestrator | 2025-05-19 14:20:57.004475 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-19 14:20:57.005247 | orchestrator | Monday 19 May 2025 14:20:56 +0000 (0:00:01.707) 0:03:00.535 ************ 2025-05-19 14:21:08.705451 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:21:08.705592 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:21:08.706110 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:21:08.707697 | orchestrator 
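
After the 77-second "Install required packages" step, the role tidies up: "Remove useless packages from the cache" and "Remove dependencies that are no longer required" map naturally onto apt's autoclean and autoremove, and the results just below show that only testbed-manager still had removable dependencies. A sketch, assuming the ansible.builtin.apt module is used for both:

    - name: Remove useless packages from the cache (sketch)
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required (sketch)
      ansible.builtin.apt:
        autoremove: true
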
| ok: [testbed-node-5] 2025-05-19 14:21:08.708470 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:21:08.710332 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:21:08.711872 | orchestrator | changed: [testbed-manager] 2025-05-19 14:21:08.715076 | orchestrator | 2025-05-19 14:21:08.715535 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-05-19 14:21:08.716296 | orchestrator | Monday 19 May 2025 14:21:08 +0000 (0:00:11.706) 0:03:12.242 ************ 2025-05-19 14:21:09.114936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-19 14:21:09.115066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-19 14:21:09.115145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-19 14:21:09.117711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-19 14:21:09.117736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-05-19 14:21:09.117749 | orchestrator | 2025-05-19 14:21:09.118749 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-19 14:21:09.119518 | orchestrator | Monday 19 May 2025 14:21:09 +0000 (0:00:00.412) 0:03:12.654 ************ 2025-05-19 14:21:09.172581 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-19 14:21:09.201268 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:21:09.201887 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-19 14:21:09.242601 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-19 14:21:09.244019 | orchestrator | 
skipping: [testbed-node-3] 2025-05-19 14:21:09.244099 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-19 14:21:09.264701 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:21:09.290910 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:21:09.875371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 14:21:09.875483 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 14:21:09.876823 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 14:21:09.877156 | orchestrator | 2025-05-19 14:21:09.877686 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-19 14:21:09.878498 | orchestrator | Monday 19 May 2025 14:21:09 +0000 (0:00:00.760) 0:03:13.415 ************ 2025-05-19 14:21:09.965449 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-19 14:21:09.965626 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-19 14:21:09.965684 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-19 14:21:09.966792 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-19 14:21:09.966870 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-19 14:21:09.967610 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-19 14:21:09.967920 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-19 14:21:09.969443 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-19 14:21:09.969467 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-19 14:21:09.969624 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-19 14:21:09.969905 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-19 14:21:09.970383 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-19 14:21:09.971008 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-19 14:21:09.973462 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-19 14:21:09.973524 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-19 14:21:09.973546 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-19 14:21:09.973567 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-19 14:21:09.973694 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-19 14:21:10.010404 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:21:10.011463 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  
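
The sysctl role loops over keyed parameter groups (elasticsearch, rabbitmq, generic, compute, k3s_node), and each host only applies the groups it belongs to: the control-plane nodes 0-2 apply the elasticsearch and rabbitmq values while testbed-manager and nodes 3-5 skip them, every host applies the generic group (vm.swappiness), and further below the compute and k3s_node values land only on nodes 3-5. A sketch of one group's application, assuming the ansible.posix.sysctl module and a per-group sysctl.d file; the file name and the loop variable are assumptions:

    - name: Set sysctl parameters on rabbitmq (sketch)
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_file: /etc/sysctl.d/99-rabbitmq.conf  # assumed file name
        state: present
        reload: true
      loop: "{{ sysctl_rabbitmq_parameters }}"  # assumed variable name
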
2025-05-19 14:21:10.012359 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-19 14:21:10.013689 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-19 14:21:10.014498 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-19 14:21:10.070879 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:21:10.070988 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-19 14:21:10.073823 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-19 14:21:10.074593 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-19 14:21:10.076473 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-19 14:21:10.076552 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-19 14:21:10.076911 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-19 14:21:10.077828 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-19 14:21:10.078218 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-19 14:21:10.079066 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-19 14:21:10.079512 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-19 14:21:10.110459 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:21:10.110562 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-19 14:21:10.110578 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-19 14:21:10.111605 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-19 14:21:10.111862 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-19 14:21:10.112979 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-19 14:21:10.113445 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-19 14:21:10.114603 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-19 14:21:10.115102 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-19 14:21:10.141750 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:21:15.760027 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-19 14:21:15.760238 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-19 14:21:15.761575 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-19 14:21:15.761946 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-19 14:21:15.763061 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-19 14:21:15.764678 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-19 14:21:15.765958 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-19 14:21:15.766783 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-19 14:21:15.767449 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-19 14:21:15.768240 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-19 14:21:15.769082 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-19 14:21:15.769908 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-19 14:21:15.770594 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-19 14:21:15.771370 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-19 14:21:15.772177 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-19 14:21:15.772682 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-19 14:21:15.773413 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-19 14:21:15.774255 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-19 14:21:15.774975 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-19 14:21:15.775424 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-19 14:21:15.776201 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-19 14:21:15.776588 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-19 14:21:15.777177 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-19 14:21:15.777706 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-19 14:21:15.778282 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-19 14:21:15.778749 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-19 14:21:15.779615 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-19 14:21:15.779869 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-19 14:21:15.780485 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-19 14:21:15.780950 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-19 14:21:15.781468 | orchestrator | 2025-05-19 14:21:15.781926 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-19 14:21:15.782449 | 
orchestrator | Monday 19 May 2025 14:21:15 +0000 (0:00:05.884) 0:03:19.299 ************ 2025-05-19 14:21:17.247041 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 14:21:17.248030 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 14:21:17.250179 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 14:21:17.250636 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 14:21:17.251360 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 14:21:17.252026 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 14:21:17.252998 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 14:21:17.253482 | orchestrator | 2025-05-19 14:21:17.254345 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-19 14:21:17.255208 | orchestrator | Monday 19 May 2025 14:21:17 +0000 (0:00:01.487) 0:03:20.786 ************ 2025-05-19 14:21:17.309696 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 14:21:17.350208 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:21:17.440106 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 14:21:17.770590 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 14:21:17.772877 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:21:17.773659 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:21:17.774746 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 14:21:17.776303 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:21:17.777155 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-19 14:21:17.778356 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-19 14:21:17.779479 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-19 14:21:17.781027 | orchestrator | 2025-05-19 14:21:17.782060 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-19 14:21:17.783026 | orchestrator | Monday 19 May 2025 14:21:17 +0000 (0:00:00.524) 0:03:21.311 ************ 2025-05-19 14:21:17.813158 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 14:21:17.848389 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:21:17.967959 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 14:21:17.968062 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 14:21:18.360706 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:21:18.360880 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:21:18.361347 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 
14:21:18.361857 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:21:18.362757 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-19 14:21:18.362886 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-19 14:21:18.363873 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-19 14:21:18.364253 | orchestrator | 2025-05-19 14:21:18.365050 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-19 14:21:18.365274 | orchestrator | Monday 19 May 2025 14:21:18 +0000 (0:00:00.591) 0:03:21.902 ************ 2025-05-19 14:21:18.447688 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:21:18.470444 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:21:18.496742 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:21:18.522454 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:21:18.639423 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:21:18.639515 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:21:18.639951 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:21:18.639974 | orchestrator | 2025-05-19 14:21:18.640297 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-19 14:21:18.640608 | orchestrator | Monday 19 May 2025 14:21:18 +0000 (0:00:00.278) 0:03:22.181 ************ 2025-05-19 14:21:24.304756 | orchestrator | ok: [testbed-manager] 2025-05-19 14:21:24.305287 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:21:24.306474 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:21:24.306977 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:21:24.308804 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:21:24.308867 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:21:24.309653 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:21:24.310507 | orchestrator | 2025-05-19 14:21:24.311563 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-19 14:21:24.312054 | orchestrator | Monday 19 May 2025 14:21:24 +0000 (0:00:05.665) 0:03:27.846 ************ 2025-05-19 14:21:24.393712 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-19 14:21:24.393822 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-19 14:21:24.431210 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:21:24.432863 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-19 14:21:24.466777 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:21:24.466876 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-19 14:21:24.499320 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:21:24.499418 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-19 14:21:24.539300 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:21:24.539468 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-19 14:21:24.602324 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:21:24.602421 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:21:24.602591 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-19 14:21:24.603279 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:21:24.603981 | orchestrator | 2025-05-19 14:21:24.604527 | orchestrator | TASK [osism.commons.services : Start/enable required services] 
***************** 2025-05-19 14:21:24.605309 | orchestrator | Monday 19 May 2025 14:21:24 +0000 (0:00:00.296) 0:03:28.142 ************ 2025-05-19 14:21:25.592561 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-19 14:21:25.593500 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-19 14:21:25.594548 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-19 14:21:25.595808 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-19 14:21:25.596751 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-19 14:21:25.597873 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-19 14:21:25.598682 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-19 14:21:25.599369 | orchestrator | 2025-05-19 14:21:25.600223 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-19 14:21:25.601664 | orchestrator | Monday 19 May 2025 14:21:25 +0000 (0:00:00.989) 0:03:29.132 ************ 2025-05-19 14:21:26.062561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:21:26.063652 | orchestrator | 2025-05-19 14:21:26.065375 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-19 14:21:26.066704 | orchestrator | Monday 19 May 2025 14:21:26 +0000 (0:00:00.471) 0:03:29.604 ************ 2025-05-19 14:21:27.296216 | orchestrator | ok: [testbed-manager] 2025-05-19 14:21:27.298511 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:21:27.298550 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:21:27.298564 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:21:27.298617 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:21:27.300250 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:21:27.301134 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:21:27.301625 | orchestrator | 2025-05-19 14:21:27.302526 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-19 14:21:27.302876 | orchestrator | Monday 19 May 2025 14:21:27 +0000 (0:00:01.232) 0:03:30.837 ************ 2025-05-19 14:21:27.912992 | orchestrator | ok: [testbed-manager] 2025-05-19 14:21:27.913094 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:21:27.913488 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:21:27.914572 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:21:27.914931 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:21:27.915718 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:21:27.916319 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:21:27.916741 | orchestrator | 2025-05-19 14:21:27.917369 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-19 14:21:27.917821 | orchestrator | Monday 19 May 2025 14:21:27 +0000 (0:00:00.616) 0:03:31.453 ************ 2025-05-19 14:21:28.622567 | orchestrator | changed: [testbed-manager] 2025-05-19 14:21:28.624334 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:21:28.624432 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:21:28.625293 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:21:28.625947 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:21:28.626520 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:21:28.627437 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:21:28.627851 | 
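
The services role above populates service facts, finds nothing to warn about (the nscd check is skipped everywhere), and ensures cron is running and enabled on all seven hosts. A sketch, assuming ansible.builtin.service_facts and ansible.builtin.service; the list variable name is an assumption:

    - name: Populate service facts (sketch)
      ansible.builtin.service_facts:

    - name: Start/enable required services (sketch)
      ansible.builtin.service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop: "{{ required_services | default(['cron']) }}"  # variable name assumed
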
orchestrator | 2025-05-19 14:21:28.628898 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-19 14:21:28.631026 | orchestrator | Monday 19 May 2025 14:21:28 +0000 (0:00:00.706) 0:03:32.160 ************ 2025-05-19 14:21:29.174571 | orchestrator | ok: [testbed-manager] 2025-05-19 14:21:29.174681 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:21:29.174980 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:21:29.178591 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:21:29.178841 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:21:29.179680 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:21:29.182200 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:21:29.182313 | orchestrator | 2025-05-19 14:21:29.182439 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-19 14:21:29.183262 | orchestrator | Monday 19 May 2025 14:21:29 +0000 (0:00:00.555) 0:03:32.716 ************ 2025-05-19 14:21:30.164877 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747662710.314766, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.164967 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747662740.4827273, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.165479 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747662736.0257883, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.169064 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747662736.3638554, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.170090 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747662735.246203, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.172260 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747662737.084831, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.172287 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747662731.4037406, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.173282 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747662731.4017282, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.173956 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747662655.0752363, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.174934 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747662658.9557903, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.175817 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747662655.6161, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.176650 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747662656.5853689, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.176929 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747662656.9168496, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.177431 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747662650.4139025, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:21:30.177956 | orchestrator | 2025-05-19 14:21:30.178461 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-19 14:21:30.179540 | orchestrator | Monday 19 May 2025 14:21:30 +0000 (0:00:00.990) 0:03:33.706 ************ 2025-05-19 14:21:31.291061 | orchestrator | changed: [testbed-manager] 2025-05-19 14:21:31.294775 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:21:31.295981 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:21:31.296345 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:21:31.297156 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:21:31.298605 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:21:31.298816 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:21:31.299244 | orchestrator | 2025-05-19 14:21:31.300659 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-19 14:21:31.301035 | orchestrator | Monday 19 May 2025 14:21:31 +0000 (0:00:01.124) 0:03:34.831 ************ 2025-05-19 14:21:32.495658 | orchestrator | changed: [testbed-manager] 2025-05-19 14:21:32.496221 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:21:32.497242 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:21:32.498328 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:21:32.498879 | 
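
The osism.commons.motd role is stamping the three classic login banner files, /etc/motd, /etc/issue and /etc/issue.net, onto every host. A minimal sketch of the pattern, assuming a hypothetical template name rather than the role's real file layout:

    - name: Copy motd file
      ansible.builtin.template:
        src: motd.j2            # hypothetical template name, not the role's actual file
        dest: /etc/motd
        owner: root
        group: root
        mode: "0644"

The same pattern repeats for /etc/issue and /etc/issue.net in the tasks that follow.
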
orchestrator | changed: [testbed-node-0] 2025-05-19 14:21:32.499318 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:21:32.500168 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:21:32.500715 | orchestrator | 2025-05-19 14:21:32.501654 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-05-19 14:21:32.501974 | orchestrator | Monday 19 May 2025 14:21:32 +0000 (0:00:01.204) 0:03:36.035 ************ 2025-05-19 14:21:33.651605 | orchestrator | changed: [testbed-manager] 2025-05-19 14:21:33.652531 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:21:33.652922 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:21:33.656276 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:21:33.657203 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:21:33.657991 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:21:33.659308 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:21:33.659943 | orchestrator | 2025-05-19 14:21:33.660772 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-05-19 14:21:33.661488 | orchestrator | Monday 19 May 2025 14:21:33 +0000 (0:00:01.155) 0:03:37.191 ************ 2025-05-19 14:21:33.754433 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:21:33.789303 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:21:33.821448 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:21:33.854429 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:21:33.904615 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:21:33.905179 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:21:33.906076 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:21:33.907526 | orchestrator | 2025-05-19 14:21:33.908297 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-19 14:21:33.909163 | orchestrator | Monday 19 May 2025 14:21:33 +0000 (0:00:00.256) 0:03:37.447 ************ 2025-05-19 14:21:34.701627 | orchestrator | ok: [testbed-manager] 2025-05-19 14:21:34.702419 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:21:34.705789 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:21:34.705861 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:21:34.705875 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:21:34.705887 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:21:34.707073 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:21:34.708134 | orchestrator | 2025-05-19 14:21:34.708846 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-19 14:21:34.710304 | orchestrator | Monday 19 May 2025 14:21:34 +0000 (0:00:00.794) 0:03:38.242 ************ 2025-05-19 14:21:35.133377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:21:35.133607 | orchestrator | 2025-05-19 14:21:35.134476 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-19 14:21:35.135209 | orchestrator | Monday 19 May 2025 14:21:35 +0000 (0:00:00.432) 0:03:38.675 ************ 2025-05-19 14:21:42.828839 | orchestrator | ok: [testbed-manager] 2025-05-19 14:21:42.829274 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:21:42.832619 | orchestrator | changed: [testbed-node-5] 2025-05-19 
14:21:42.833566 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:21:42.833826 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:21:42.834852 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:21:42.835631 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:21:42.836498 | orchestrator | 2025-05-19 14:21:42.837385 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-19 14:21:42.838431 | orchestrator | Monday 19 May 2025 14:21:42 +0000 (0:00:07.694) 0:03:46.369 ************ 2025-05-19 14:21:44.149559 | orchestrator | ok: [testbed-manager] 2025-05-19 14:21:44.150002 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:21:44.150295 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:21:44.152938 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:21:44.153511 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:21:44.154284 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:21:44.159720 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:21:44.159765 | orchestrator | 2025-05-19 14:21:44.159778 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-19 14:21:44.159835 | orchestrator | Monday 19 May 2025 14:21:44 +0000 (0:00:01.312) 0:03:47.681 ************ 2025-05-19 14:21:45.209826 | orchestrator | ok: [testbed-manager] 2025-05-19 14:21:45.209918 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:21:45.210171 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:21:45.210969 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:21:45.211190 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:21:45.211510 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:21:45.216217 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:21:45.216259 | orchestrator | 2025-05-19 14:21:45.216303 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-19 14:21:45.216802 | orchestrator | Monday 19 May 2025 14:21:45 +0000 (0:00:01.069) 0:03:48.750 ************ 2025-05-19 14:21:45.751347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:21:45.752527 | orchestrator | 2025-05-19 14:21:45.753142 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-19 14:21:45.757532 | orchestrator | Monday 19 May 2025 14:21:45 +0000 (0:00:00.541) 0:03:49.292 ************ 2025-05-19 14:21:53.945884 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:21:53.946240 | orchestrator | changed: [testbed-manager] 2025-05-19 14:21:53.947284 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:21:53.952132 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:21:53.952190 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:21:53.952625 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:21:53.954125 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:21:53.956146 | orchestrator | 2025-05-19 14:21:53.956244 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-19 14:21:53.958928 | orchestrator | Monday 19 May 2025 14:21:53 +0000 (0:00:08.191) 0:03:57.484 ************ 2025-05-19 14:21:54.596867 | orchestrator | changed: [testbed-manager] 2025-05-19 14:21:54.597300 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:21:54.597739 | 
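
The rng role replaces the userspace entropy daemon haveged with rng-tools and keeps the service running. A rough sketch under assumed package and service names (the role's exact names may differ per distribution):

    - name: Install rng package
      ansible.builtin.apt:
        name: rng-tools         # assumed name; Ubuntu 24.04 may ship this as rng-tools5
        state: present

    - name: Remove haveged package
      ansible.builtin.apt:
        name: haveged
        state: absent

    - name: Manage rng service
      ansible.builtin.service:
        name: rngd              # assumed service name
        state: started
        enabled: true
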
orchestrator | changed: [testbed-node-4] 2025-05-19 14:21:54.598756 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:21:54.599755 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:21:54.602994 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:21:54.603482 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:21:54.604124 | orchestrator | 2025-05-19 14:21:54.604836 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-19 14:21:54.607907 | orchestrator | Monday 19 May 2025 14:21:54 +0000 (0:00:00.651) 0:03:58.136 ************ 2025-05-19 14:21:55.816352 | orchestrator | changed: [testbed-manager] 2025-05-19 14:21:55.821460 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:21:55.821497 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:21:55.821511 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:21:55.822226 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:21:55.825269 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:21:55.825960 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:21:55.826432 | orchestrator | 2025-05-19 14:21:55.827281 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-19 14:21:55.827992 | orchestrator | Monday 19 May 2025 14:21:55 +0000 (0:00:01.219) 0:03:59.355 ************ 2025-05-19 14:21:56.904407 | orchestrator | changed: [testbed-manager] 2025-05-19 14:21:56.905095 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:21:56.909112 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:21:56.910729 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:21:56.911852 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:21:56.913061 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:21:56.916471 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:21:56.917095 | orchestrator | 2025-05-19 14:21:56.917919 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-19 14:21:56.919269 | orchestrator | Monday 19 May 2025 14:21:56 +0000 (0:00:01.087) 0:04:00.443 ************ 2025-05-19 14:21:57.067683 | orchestrator | ok: [testbed-manager] 2025-05-19 14:21:57.099398 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:21:57.135297 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:21:57.197501 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:21:57.197971 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:21:57.199814 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:21:57.202392 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:21:57.203178 | orchestrator | 2025-05-19 14:21:57.204177 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-19 14:21:57.205228 | orchestrator | Monday 19 May 2025 14:21:57 +0000 (0:00:00.296) 0:04:00.739 ************ 2025-05-19 14:21:57.296504 | orchestrator | ok: [testbed-manager] 2025-05-19 14:21:57.328615 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:21:57.360812 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:21:57.402637 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:21:57.491549 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:21:57.493057 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:21:57.497703 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:21:57.498188 | orchestrator | 2025-05-19 14:21:57.499589 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] 
*** 2025-05-19 14:21:57.501232 | orchestrator | Monday 19 May 2025 14:21:57 +0000 (0:00:00.293) 0:04:01.033 ************ 2025-05-19 14:21:57.599310 | orchestrator | ok: [testbed-manager] 2025-05-19 14:21:57.643177 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:21:57.691327 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:21:57.727186 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:21:57.807606 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:21:57.808733 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:21:57.809690 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:21:57.813329 | orchestrator | 2025-05-19 14:21:57.813436 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-19 14:21:57.814208 | orchestrator | Monday 19 May 2025 14:21:57 +0000 (0:00:00.317) 0:04:01.350 ************ 2025-05-19 14:22:03.440764 | orchestrator | ok: [testbed-manager] 2025-05-19 14:22:03.440959 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:22:03.442688 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:22:03.445780 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:22:03.447614 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:22:03.447903 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:22:03.448626 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:22:03.450127 | orchestrator | 2025-05-19 14:22:03.450744 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-19 14:22:03.451430 | orchestrator | Monday 19 May 2025 14:22:03 +0000 (0:00:05.631) 0:04:06.982 ************ 2025-05-19 14:22:03.866256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:22:03.867672 | orchestrator | 2025-05-19 14:22:03.872957 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-19 14:22:03.874244 | orchestrator | Monday 19 May 2025 14:22:03 +0000 (0:00:00.424) 0:04:07.406 ************ 2025-05-19 14:22:03.946870 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-19 14:22:03.949738 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-19 14:22:03.989201 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-19 14:22:03.989354 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:22:03.989717 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-19 14:22:03.990210 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-19 14:22:03.990786 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-19 14:22:04.034444 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:22:04.035751 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-19 14:22:04.036835 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-19 14:22:04.087132 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:22:04.090873 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-19 14:22:04.090907 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-19 14:22:04.120865 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:22:04.209948 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-19 14:22:04.210277 | 
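
The cleanup role first gathers systemd state with service_facts and then conditionally disables the apt-daily timers; in this run every host skipped the timer task, so the governing condition was false. A hypothetical sketch of that shape (the flag name is invented for illustration):

    - name: Populate service facts
      ansible.builtin.service_facts:

    - name: Disable apt-daily timers
      ansible.builtin.systemd:
        name: "{{ item }}.timer"        # assumes the role appends the .timer suffix
        state: stopped
        enabled: false
      loop:
        - apt-daily-upgrade
        - apt-daily
      when: cleanup_disable_apt_timers | default(false)   # hypothetical flag; false here, hence the skips
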
orchestrator | skipping: [testbed-node-0] 2025-05-19 14:22:04.211211 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-19 14:22:04.211880 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:22:04.212396 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-19 14:22:04.213295 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-19 14:22:04.213767 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:22:04.214616 | orchestrator | 2025-05-19 14:22:04.215170 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-19 14:22:04.215783 | orchestrator | Monday 19 May 2025 14:22:04 +0000 (0:00:00.346) 0:04:07.753 ************ 2025-05-19 14:22:04.593389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:22:04.594744 | orchestrator | 2025-05-19 14:22:04.597049 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-19 14:22:04.597920 | orchestrator | Monday 19 May 2025 14:22:04 +0000 (0:00:00.381) 0:04:08.134 ************ 2025-05-19 14:22:04.664911 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-19 14:22:04.705977 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:22:04.709428 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-19 14:22:04.709469 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-19 14:22:04.740916 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:22:04.741153 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-19 14:22:04.776120 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:22:04.777621 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-19 14:22:04.808763 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:22:04.893806 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:22:04.896600 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-19 14:22:04.897980 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:22:04.899708 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-19 14:22:04.901755 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:22:04.903298 | orchestrator | 2025-05-19 14:22:04.905742 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-19 14:22:04.907005 | orchestrator | Monday 19 May 2025 14:22:04 +0000 (0:00:00.299) 0:04:08.434 ************ 2025-05-19 14:22:05.436212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:22:05.436348 | orchestrator | 2025-05-19 14:22:05.437218 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-19 14:22:05.438824 | orchestrator | Monday 19 May 2025 14:22:05 +0000 (0:00:00.541) 0:04:08.975 ************ 2025-05-19 14:22:39.813599 | orchestrator | changed: [testbed-manager] 2025-05-19 14:22:39.813728 | orchestrator | changed: [testbed-node-0] 2025-05-19 
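
The package cleanup that follows is the slowest step in this stretch (about 34 seconds) because it removes a whole list of unwanted packages on all seven hosts at once. Presumably something close to:

    - name: Cleanup installed packages
      ansible.builtin.apt:
        name: "{{ cleanup_packages_distribution }}"   # variable populated earlier in this play
        state: absent
        purge: true                                   # assumption; the role may not purge
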
14:22:39.813745 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:22:39.813757 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:22:39.814964 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:22:39.817446 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:22:39.818369 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:22:39.823501 | orchestrator | 2025-05-19 14:22:39.823533 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-19 14:22:39.823546 | orchestrator | Monday 19 May 2025 14:22:39 +0000 (0:00:34.375) 0:04:43.350 ************ 2025-05-19 14:22:47.770245 | orchestrator | changed: [testbed-manager] 2025-05-19 14:22:47.770454 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:22:47.771242 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:22:47.771678 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:22:47.772460 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:22:47.773114 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:22:47.774793 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:22:47.775372 | orchestrator | 2025-05-19 14:22:47.775621 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-19 14:22:47.776574 | orchestrator | Monday 19 May 2025 14:22:47 +0000 (0:00:07.959) 0:04:51.310 ************ 2025-05-19 14:22:55.072697 | orchestrator | changed: [testbed-manager] 2025-05-19 14:22:55.073229 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:22:55.075182 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:22:55.075558 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:22:55.077327 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:22:55.078649 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:22:55.079270 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:22:55.079882 | orchestrator | 2025-05-19 14:22:55.080815 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-19 14:22:55.081547 | orchestrator | Monday 19 May 2025 14:22:55 +0000 (0:00:07.303) 0:04:58.614 ************ 2025-05-19 14:22:56.740561 | orchestrator | ok: [testbed-manager] 2025-05-19 14:22:56.740786 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:22:56.741878 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:22:56.742891 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:22:56.743905 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:22:56.745148 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:22:56.745672 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:22:56.746390 | orchestrator | 2025-05-19 14:22:56.747305 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-19 14:22:56.747669 | orchestrator | Monday 19 May 2025 14:22:56 +0000 (0:00:01.665) 0:05:00.279 ************ 2025-05-19 14:23:02.285218 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:02.288327 | orchestrator | changed: [testbed-manager] 2025-05-19 14:23:02.288381 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:02.289754 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:02.290584 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:23:02.291295 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:02.292098 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:02.292755 | orchestrator | 2025-05-19 14:23:02.293134 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit 
tasks] ************************* 2025-05-19 14:23:02.293627 | orchestrator | Monday 19 May 2025 14:23:02 +0000 (0:00:05.546) 0:05:05.825 ************ 2025-05-19 14:23:02.743262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:23:02.743482 | orchestrator | 2025-05-19 14:23:02.743806 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-19 14:23:02.744460 | orchestrator | Monday 19 May 2025 14:23:02 +0000 (0:00:00.458) 0:05:06.284 ************ 2025-05-19 14:23:03.531389 | orchestrator | changed: [testbed-manager] 2025-05-19 14:23:03.532094 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:03.533169 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:03.536704 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:03.537426 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:03.538144 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:23:03.538808 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:03.539365 | orchestrator | 2025-05-19 14:23:03.539862 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-19 14:23:03.542791 | orchestrator | Monday 19 May 2025 14:23:03 +0000 (0:00:00.787) 0:05:07.072 ************ 2025-05-19 14:23:05.129790 | orchestrator | ok: [testbed-manager] 2025-05-19 14:23:05.131126 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:23:05.131870 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:23:05.133725 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:23:05.134245 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:23:05.135928 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:23:05.136605 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:23:05.138095 | orchestrator | 2025-05-19 14:23:05.138452 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-19 14:23:05.139674 | orchestrator | Monday 19 May 2025 14:23:05 +0000 (0:00:01.598) 0:05:08.671 ************ 2025-05-19 14:23:05.997620 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:05.998449 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:06.000370 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:06.000534 | orchestrator | changed: [testbed-manager] 2025-05-19 14:23:06.001135 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:06.001963 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:06.002969 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:23:06.004187 | orchestrator | 2025-05-19 14:23:06.004870 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-19 14:23:06.005356 | orchestrator | Monday 19 May 2025 14:23:05 +0000 (0:00:00.866) 0:05:09.538 ************ 2025-05-19 14:23:06.062310 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:23:06.093468 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:23:06.124771 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:23:06.155218 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:23:06.185906 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:23:06.249497 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:23:06.249727 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:23:06.251417 | orchestrator | 2025-05-19 
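
Removing the cloud-init package alone would leave its datasource configuration behind, so the role also deletes the configuration directory; afterwards the timezone role pins every host to UTC. Sketches under an assumed path and zone spelling:

    - name: Remove cloud-init configuration directory
      ansible.builtin.file:
        path: /etc/cloud          # assumed location of the configuration directory
        state: absent

    - name: Set timezone to UTC
      community.general.timezone:
        name: Etc/UTC             # plain "UTC" would work as well
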
14:23:06.251450 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-19 14:23:06.253309 | orchestrator | Monday 19 May 2025 14:23:06 +0000 (0:00:00.254) 0:05:09.792 ************ 2025-05-19 14:23:06.328231 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:23:06.393385 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:23:06.426343 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:23:06.456264 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:23:06.635715 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:23:06.635884 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:23:06.637156 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:23:06.637342 | orchestrator | 2025-05-19 14:23:06.638169 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-19 14:23:06.638895 | orchestrator | Monday 19 May 2025 14:23:06 +0000 (0:00:00.386) 0:05:10.178 ************ 2025-05-19 14:23:06.750417 | orchestrator | ok: [testbed-manager] 2025-05-19 14:23:06.782657 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:23:06.819781 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:23:06.855035 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:23:06.928807 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:23:06.930743 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:23:06.932125 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:23:06.937133 | orchestrator | 2025-05-19 14:23:06.937161 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-19 14:23:06.937175 | orchestrator | Monday 19 May 2025 14:23:06 +0000 (0:00:00.293) 0:05:10.471 ************ 2025-05-19 14:23:07.027933 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:23:07.093293 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:23:07.142468 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:23:07.193592 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:23:07.264709 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:23:07.266541 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:23:07.266647 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:23:07.267668 | orchestrator | 2025-05-19 14:23:07.269360 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-19 14:23:07.270407 | orchestrator | Monday 19 May 2025 14:23:07 +0000 (0:00:00.333) 0:05:10.804 ************ 2025-05-19 14:23:07.372287 | orchestrator | ok: [testbed-manager] 2025-05-19 14:23:07.410506 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:23:07.445337 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:23:07.483545 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:23:07.555382 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:23:07.555474 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:23:07.556235 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:23:07.557032 | orchestrator | 2025-05-19 14:23:07.557927 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-05-19 14:23:07.558789 | orchestrator | Monday 19 May 2025 14:23:07 +0000 (0:00:00.291) 0:05:11.096 ************ 2025-05-19 14:23:07.774427 | orchestrator | ok: [testbed-manager] =>  2025-05-19 14:23:07.774822 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 14:23:07.806713 | orchestrator | ok: [testbed-node-3] =>  2025-05-19 14:23:07.807912 | orchestrator |  
docker_version: 5:27.5.1 2025-05-19 14:23:07.853820 | orchestrator | ok: [testbed-node-4] =>  2025-05-19 14:23:07.854572 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 14:23:07.883143 | orchestrator | ok: [testbed-node-5] =>  2025-05-19 14:23:07.883901 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 14:23:07.941898 | orchestrator | ok: [testbed-node-0] =>  2025-05-19 14:23:07.942168 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 14:23:07.943916 | orchestrator | ok: [testbed-node-1] =>  2025-05-19 14:23:07.944679 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 14:23:07.945671 | orchestrator | ok: [testbed-node-2] =>  2025-05-19 14:23:07.946454 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 14:23:07.947091 | orchestrator | 2025-05-19 14:23:07.948121 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-05-19 14:23:07.948690 | orchestrator | Monday 19 May 2025 14:23:07 +0000 (0:00:00.387) 0:05:11.483 ************ 2025-05-19 14:23:08.048081 | orchestrator | ok: [testbed-manager] =>  2025-05-19 14:23:08.048183 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 14:23:08.117123 | orchestrator | ok: [testbed-node-3] =>  2025-05-19 14:23:08.117346 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 14:23:08.148059 | orchestrator | ok: [testbed-node-4] =>  2025-05-19 14:23:08.148490 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 14:23:08.185119 | orchestrator | ok: [testbed-node-5] =>  2025-05-19 14:23:08.186309 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 14:23:08.254405 | orchestrator | ok: [testbed-node-0] =>  2025-05-19 14:23:08.254668 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 14:23:08.255141 | orchestrator | ok: [testbed-node-1] =>  2025-05-19 14:23:08.255965 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 14:23:08.256928 | orchestrator | ok: [testbed-node-2] =>  2025-05-19 14:23:08.258606 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 14:23:08.258698 | orchestrator | 2025-05-19 14:23:08.261055 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-19 14:23:08.261973 | orchestrator | Monday 19 May 2025 14:23:08 +0000 (0:00:00.313) 0:05:11.797 ************ 2025-05-19 14:23:08.340773 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:23:08.372049 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:23:08.403140 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:23:08.436515 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:23:08.466863 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:23:08.518362 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:23:08.518472 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:23:08.522235 | orchestrator | 2025-05-19 14:23:08.522264 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-19 14:23:08.522278 | orchestrator | Monday 19 May 2025 14:23:08 +0000 (0:00:00.263) 0:05:12.061 ************ 2025-05-19 14:23:08.577576 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:23:08.611167 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:23:08.675042 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:23:08.718223 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:23:08.774714 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:23:08.775699 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:23:08.778311 | orchestrator | 
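
The reported docker_version of 5:27.5.1 is a Debian-style package version: the leading 5: is the epoch, which forces correct upgrade ordering across packaging scheme changes, and 27.5.1 is the upstream Docker release. The print tasks themselves are plain debug calls:

    - name: Print used docker version
      ansible.builtin.debug:
        var: docker_version
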
skipping: [testbed-node-2] 2025-05-19 14:23:08.778335 | orchestrator | 2025-05-19 14:23:08.778348 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-19 14:23:08.778361 | orchestrator | Monday 19 May 2025 14:23:08 +0000 (0:00:00.255) 0:05:12.316 ************ 2025-05-19 14:23:09.202804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:23:09.203573 | orchestrator | 2025-05-19 14:23:09.207492 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-19 14:23:09.207530 | orchestrator | Monday 19 May 2025 14:23:09 +0000 (0:00:00.426) 0:05:12.743 ************ 2025-05-19 14:23:10.067124 | orchestrator | ok: [testbed-manager] 2025-05-19 14:23:10.067233 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:23:10.067700 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:23:10.068436 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:23:10.071622 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:23:10.071647 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:23:10.071659 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:23:10.071671 | orchestrator | 2025-05-19 14:23:10.071684 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-19 14:23:10.072096 | orchestrator | Monday 19 May 2025 14:23:10 +0000 (0:00:00.862) 0:05:13.606 ************ 2025-05-19 14:23:12.809968 | orchestrator | ok: [testbed-manager] 2025-05-19 14:23:12.816505 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:23:12.817236 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:23:12.818631 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:23:12.819291 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:23:12.819801 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:23:12.820357 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:23:12.820832 | orchestrator | 2025-05-19 14:23:12.821315 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-19 14:23:12.821796 | orchestrator | Monday 19 May 2025 14:23:12 +0000 (0:00:02.742) 0:05:16.348 ************ 2025-05-19 14:23:12.878077 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-19 14:23:13.102647 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-19 14:23:13.103058 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-19 14:23:13.103700 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-19 14:23:13.104812 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-19 14:23:13.105456 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-19 14:23:13.172237 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:23:13.172701 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-19 14:23:13.174107 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-19 14:23:13.269386 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:23:13.269711 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-19 14:23:13.270659 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-19 14:23:13.280014 | orchestrator | skipping: [testbed-node-5] => 
(item=docker.io)  2025-05-19 14:23:13.280090 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-19 14:23:13.346531 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:23:13.348528 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-19 14:23:13.349341 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-19 14:23:13.352450 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-19 14:23:13.415866 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:23:13.416350 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-19 14:23:13.417114 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-19 14:23:13.417840 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-19 14:23:13.546852 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:23:13.547229 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:23:13.548273 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-19 14:23:13.548917 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-19 14:23:13.549760 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-19 14:23:13.551630 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:23:13.551916 | orchestrator | 2025-05-19 14:23:13.552417 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-19 14:23:13.552917 | orchestrator | Monday 19 May 2025 14:23:13 +0000 (0:00:00.738) 0:05:17.087 ************ 2025-05-19 14:23:19.572060 | orchestrator | ok: [testbed-manager] 2025-05-19 14:23:19.572517 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:19.573482 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:19.574230 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:23:19.575166 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:19.575467 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:19.576270 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:19.577168 | orchestrator | 2025-05-19 14:23:19.578702 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-19 14:23:19.578927 | orchestrator | Monday 19 May 2025 14:23:19 +0000 (0:00:06.024) 0:05:23.111 ************ 2025-05-19 14:23:20.642621 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:20.642895 | orchestrator | ok: [testbed-manager] 2025-05-19 14:23:20.642933 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:20.643190 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:20.644195 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:20.647173 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:23:20.647786 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:20.649072 | orchestrator | 2025-05-19 14:23:20.649948 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-19 14:23:20.650548 | orchestrator | Monday 19 May 2025 14:23:20 +0000 (0:00:01.070) 0:05:24.182 ************ 2025-05-19 14:23:28.578564 | orchestrator | ok: [testbed-manager] 2025-05-19 14:23:28.578693 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:28.579738 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:28.581175 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:28.582163 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:28.584230 | orchestrator | changed: 
[testbed-node-1] 2025-05-19 14:23:28.584872 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:28.585636 | orchestrator | 2025-05-19 14:23:28.586702 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-19 14:23:28.587265 | orchestrator | Monday 19 May 2025 14:23:28 +0000 (0:00:07.934) 0:05:32.117 ************ 2025-05-19 14:23:31.938463 | orchestrator | changed: [testbed-manager] 2025-05-19 14:23:31.938707 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:31.939482 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:31.940492 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:31.941024 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:31.944472 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:23:31.944532 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:31.944553 | orchestrator | 2025-05-19 14:23:31.944574 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-19 14:23:31.944647 | orchestrator | Monday 19 May 2025 14:23:31 +0000 (0:00:03.362) 0:05:35.479 ************ 2025-05-19 14:23:33.268532 | orchestrator | ok: [testbed-manager] 2025-05-19 14:23:33.268748 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:33.270144 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:33.271138 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:33.272410 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:33.273631 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:23:33.275382 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:33.276325 | orchestrator | 2025-05-19 14:23:33.277653 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-19 14:23:33.278522 | orchestrator | Monday 19 May 2025 14:23:33 +0000 (0:00:01.328) 0:05:36.808 ************ 2025-05-19 14:23:34.584375 | orchestrator | ok: [testbed-manager] 2025-05-19 14:23:34.584631 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:34.585425 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:34.587698 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:34.588197 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:34.588722 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:23:34.589147 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:34.589562 | orchestrator | 2025-05-19 14:23:34.589947 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-19 14:23:34.590508 | orchestrator | Monday 19 May 2025 14:23:34 +0000 (0:00:01.313) 0:05:38.121 ************ 2025-05-19 14:23:34.783199 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:23:34.855798 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:23:34.920056 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:23:34.984709 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:23:35.185708 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:23:35.185882 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:23:35.186724 | orchestrator | changed: [testbed-manager] 2025-05-19 14:23:35.187519 | orchestrator | 2025-05-19 14:23:35.188270 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-19 14:23:35.190138 | orchestrator | Monday 19 May 2025 14:23:35 +0000 (0:00:00.605) 0:05:38.727 ************ 2025-05-19 14:23:44.838685 | orchestrator | ok: [testbed-manager] 2025-05-19 
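
Two complementary mechanisms keep the Docker stack from drifting: apt pinning for the docker and docker-cli packages, and a dpkg hold for containerd (the earlier unlock on testbed-manager released the hold so the package could be upgraded, and the lock below reinstates it). The preferences file layout here is an assumption, not the role's verbatim output:

    - name: Pin docker package version
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce   # hypothetical file name
        content: |
          Package: docker-ce
          Pin: version {{ docker_version }}
          Pin-Priority: 1001
        mode: "0644"

    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io                      # assumed package name
        selection: hold
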
14:23:44.838872 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:44.838894 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:44.839589 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:44.841546 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:44.842509 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:23:44.843422 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:44.844064 | orchestrator | 2025-05-19 14:23:44.844907 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-19 14:23:44.845799 | orchestrator | Monday 19 May 2025 14:23:44 +0000 (0:00:09.649) 0:05:48.377 ************ 2025-05-19 14:23:45.942642 | orchestrator | changed: [testbed-manager] 2025-05-19 14:23:45.942907 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:45.944143 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:45.947482 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:45.947516 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:45.950667 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:23:45.953062 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:45.953474 | orchestrator | 2025-05-19 14:23:45.955098 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-19 14:23:45.955287 | orchestrator | Monday 19 May 2025 14:23:45 +0000 (0:00:01.104) 0:05:49.482 ************ 2025-05-19 14:23:54.912592 | orchestrator | ok: [testbed-manager] 2025-05-19 14:23:54.912829 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:23:54.913688 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:23:54.914716 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:23:54.914840 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:23:54.915411 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:23:54.917086 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:23:54.917397 | orchestrator | 2025-05-19 14:23:54.917833 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-19 14:23:54.918336 | orchestrator | Monday 19 May 2025 14:23:54 +0000 (0:00:08.969) 0:05:58.451 ************ 2025-05-19 14:24:05.227012 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:05.227141 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:05.227826 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:05.229071 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:05.230304 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:05.231475 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:05.232183 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:05.232979 | orchestrator | 2025-05-19 14:24:05.233736 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-19 14:24:05.235623 | orchestrator | Monday 19 May 2025 14:24:05 +0000 (0:00:10.312) 0:06:08.764 ************ 2025-05-19 14:24:05.554892 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-19 14:24:06.405570 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-19 14:24:06.405745 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-19 14:24:06.405824 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-19 14:24:06.407862 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-19 14:24:06.409399 | orchestrator | ok: [testbed-node-0] => 
(item=python3-docker) 2025-05-19 14:24:06.410357 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-19 14:24:06.412258 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-19 14:24:06.412287 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-19 14:24:06.413113 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-19 14:24:06.413801 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-19 14:24:06.414485 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-19 14:24:06.415119 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-19 14:24:06.416234 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-19 14:24:06.416416 | orchestrator | 2025-05-19 14:24:06.417213 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-19 14:24:06.417730 | orchestrator | Monday 19 May 2025 14:24:06 +0000 (0:00:01.180) 0:06:09.944 ************ 2025-05-19 14:24:06.551385 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:06.622670 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:24:06.688324 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:24:06.827783 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:24:06.951467 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:24:06.951651 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:24:06.953278 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:24:06.953300 | orchestrator | 2025-05-19 14:24:06.954295 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-19 14:24:06.955051 | orchestrator | Monday 19 May 2025 14:24:06 +0000 (0:00:00.547) 0:06:10.492 ************ 2025-05-19 14:24:10.740175 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:10.746859 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:10.746909 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:10.747330 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:10.749071 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:10.751526 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:10.752091 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:10.752795 | orchestrator | 2025-05-19 14:24:10.753493 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-19 14:24:10.754116 | orchestrator | Monday 19 May 2025 14:24:10 +0000 (0:00:03.786) 0:06:14.278 ************ 2025-05-19 14:24:10.863982 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:10.935988 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:24:10.998411 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:24:11.061825 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:24:11.127543 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:24:11.239344 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:24:11.240056 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:24:11.240757 | orchestrator | 2025-05-19 14:24:11.241704 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-19 14:24:11.245108 | orchestrator | Monday 19 May 2025 14:24:11 +0000 (0:00:00.500) 0:06:14.779 ************ 2025-05-19 14:24:11.327678 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-19 14:24:11.328155 | 
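
The role supports two routes for the Python Docker SDK: distribution packages, taken in this run (python3-docker, pulled from Debian Sid, presumably because Ubuntu's own build lags), or pip. The pip branch, skipped above, would reduce to roughly:

    - name: Install docker packages (install python bindings from pip)
      ansible.builtin.pip:
        name: docker
        state: present
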
orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-19 14:24:11.404027 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:11.404687 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-19 14:24:11.405691 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-19 14:24:11.471313 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:24:11.472423 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-19 14:24:11.473282 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-19 14:24:11.536709 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:24:11.536837 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-19 14:24:11.536863 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-19 14:24:11.612290 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:24:11.612979 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-19 14:24:11.617151 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-19 14:24:11.679199 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:24:11.680237 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-19 14:24:11.681435 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-19 14:24:11.792782 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:24:11.794826 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-19 14:24:11.796206 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-19 14:24:11.799360 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:24:11.799424 | orchestrator | 2025-05-19 14:24:11.800631 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-19 14:24:11.801424 | orchestrator | Monday 19 May 2025 14:24:11 +0000 (0:00:00.552) 0:06:15.331 ************ 2025-05-19 14:24:11.926005 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:11.989589 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:24:12.059388 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:24:12.124026 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:24:12.197148 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:24:12.307665 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:24:12.308858 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:24:12.313445 | orchestrator | 2025-05-19 14:24:12.313566 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-19 14:24:12.313670 | orchestrator | Monday 19 May 2025 14:24:12 +0000 (0:00:00.517) 0:06:15.849 ************ 2025-05-19 14:24:12.437832 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:12.524763 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:24:12.588962 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:24:12.652615 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:24:12.721189 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:24:12.830855 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:24:12.834140 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:24:12.834173 | orchestrator | 2025-05-19 14:24:12.834187 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-19 14:24:12.834697 | orchestrator | Monday 19 May 2025 
14:24:12 +0000 (0:00:00.520) 0:06:16.369 ************ 2025-05-19 14:24:13.140302 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:13.206304 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:24:13.269060 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:24:13.349190 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:24:13.411874 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:24:13.539303 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:24:13.539405 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:24:13.543190 | orchestrator | 2025-05-19 14:24:13.543246 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-19 14:24:13.543261 | orchestrator | Monday 19 May 2025 14:24:13 +0000 (0:00:00.709) 0:06:17.078 ************ 2025-05-19 14:24:15.227513 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:15.227751 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:24:15.230309 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:24:15.232311 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:24:15.232681 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:24:15.234319 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:24:15.234344 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:24:15.234357 | orchestrator | 2025-05-19 14:24:15.234614 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-19 14:24:15.235040 | orchestrator | Monday 19 May 2025 14:24:15 +0000 (0:00:01.688) 0:06:18.767 ************ 2025-05-19 14:24:16.075803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:24:16.076361 | orchestrator | 2025-05-19 14:24:16.076626 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-19 14:24:16.077018 | orchestrator | Monday 19 May 2025 14:24:16 +0000 (0:00:00.848) 0:06:19.616 ************ 2025-05-19 14:24:16.509268 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:17.102605 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:17.103194 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:17.104439 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:17.112106 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:17.112160 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:17.112166 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:17.112855 | orchestrator | 2025-05-19 14:24:17.113829 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-19 14:24:17.114353 | orchestrator | Monday 19 May 2025 14:24:17 +0000 (0:00:01.026) 0:06:20.643 ************ 2025-05-19 14:24:17.504236 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:17.937958 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:17.938200 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:17.938773 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:17.939500 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:17.941482 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:17.942132 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:17.943092 | orchestrator | 2025-05-19 14:24:17.943991 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] 
*********************** 2025-05-19 14:24:17.944757 | orchestrator | Monday 19 May 2025 14:24:17 +0000 (0:00:00.834) 0:06:21.478 ************ 2025-05-19 14:24:19.260561 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:19.260675 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:19.261498 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:19.262755 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:19.263892 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:19.266137 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:19.267795 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:19.269310 | orchestrator | 2025-05-19 14:24:19.270719 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-19 14:24:19.271978 | orchestrator | Monday 19 May 2025 14:24:19 +0000 (0:00:01.319) 0:06:22.797 ************ 2025-05-19 14:24:19.397496 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:20.651302 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:24:20.651411 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:24:20.653376 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:24:20.653689 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:24:20.654703 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:24:20.655218 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:24:20.656107 | orchestrator | 2025-05-19 14:24:20.657373 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-19 14:24:20.658462 | orchestrator | Monday 19 May 2025 14:24:20 +0000 (0:00:01.393) 0:06:24.191 ************ 2025-05-19 14:24:21.928508 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:21.929048 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:21.929869 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:21.931180 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:21.931222 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:21.931801 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:21.932532 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:21.933294 | orchestrator | 2025-05-19 14:24:21.933819 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-19 14:24:21.934197 | orchestrator | Monday 19 May 2025 14:24:21 +0000 (0:00:01.277) 0:06:25.468 ************ 2025-05-19 14:24:23.640473 | orchestrator | changed: [testbed-manager] 2025-05-19 14:24:23.641236 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:23.643430 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:23.644562 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:23.645638 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:23.646759 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:23.647641 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:23.648308 | orchestrator | 2025-05-19 14:24:23.648946 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-19 14:24:23.649839 | orchestrator | Monday 19 May 2025 14:24:23 +0000 (0:00:01.710) 0:06:27.178 ************ 2025-05-19 14:24:24.548678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:24:24.548848 | orchestrator | 2025-05-19 
14:24:24.548993 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-19 14:24:24.550089 | orchestrator | Monday 19 May 2025 14:24:24 +0000 (0:00:00.911) 0:06:28.090 ************ 2025-05-19 14:24:25.861758 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:25.862775 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:24:25.865447 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:24:25.867082 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:24:25.868391 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:24:25.869239 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:24:25.870224 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:24:25.871255 | orchestrator | 2025-05-19 14:24:25.872231 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-19 14:24:25.872528 | orchestrator | Monday 19 May 2025 14:24:25 +0000 (0:00:01.312) 0:06:29.402 ************ 2025-05-19 14:24:26.933760 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:26.935133 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:24:26.936940 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:24:26.938185 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:24:26.938863 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:24:26.940575 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:24:26.941799 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:24:26.944717 | orchestrator | 2025-05-19 14:24:26.944968 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-19 14:24:26.946134 | orchestrator | Monday 19 May 2025 14:24:26 +0000 (0:00:01.069) 0:06:30.472 ************ 2025-05-19 14:24:28.226114 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:28.227240 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:24:28.228142 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:24:28.229699 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:24:28.230230 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:24:28.231196 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:24:28.231812 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:24:28.232590 | orchestrator | 2025-05-19 14:24:28.233137 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-19 14:24:28.233367 | orchestrator | Monday 19 May 2025 14:24:28 +0000 (0:00:01.293) 0:06:31.765 ************ 2025-05-19 14:24:29.412662 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:29.412832 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:24:29.415120 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:24:29.415474 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:24:29.416551 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:24:29.417490 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:24:29.418247 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:24:29.418835 | orchestrator | 2025-05-19 14:24:29.419685 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-19 14:24:29.420584 | orchestrator | Monday 19 May 2025 14:24:29 +0000 (0:00:01.186) 0:06:32.952 ************ 2025-05-19 14:24:30.685550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:24:30.686601 | orchestrator | 2025-05-19 14:24:30.687546 | orchestrator 
| TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 14:24:30.689715 | orchestrator | Monday 19 May 2025 14:24:30 +0000 (0:00:00.850) 0:06:33.802 ************ 2025-05-19 14:24:30.690824 | orchestrator | 2025-05-19 14:24:30.692284 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 14:24:30.692984 | orchestrator | Monday 19 May 2025 14:24:30 +0000 (0:00:00.036) 0:06:33.839 ************ 2025-05-19 14:24:30.694585 | orchestrator | 2025-05-19 14:24:30.695360 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 14:24:30.696506 | orchestrator | Monday 19 May 2025 14:24:30 +0000 (0:00:00.036) 0:06:33.875 ************ 2025-05-19 14:24:30.697595 | orchestrator | 2025-05-19 14:24:30.698519 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 14:24:30.699680 | orchestrator | Monday 19 May 2025 14:24:30 +0000 (0:00:00.043) 0:06:33.918 ************ 2025-05-19 14:24:30.700045 | orchestrator | 2025-05-19 14:24:30.700919 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 14:24:30.701511 | orchestrator | Monday 19 May 2025 14:24:30 +0000 (0:00:00.036) 0:06:33.955 ************ 2025-05-19 14:24:30.704101 | orchestrator | 2025-05-19 14:24:30.704133 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 14:24:30.704145 | orchestrator | Monday 19 May 2025 14:24:30 +0000 (0:00:00.036) 0:06:33.991 ************ 2025-05-19 14:24:30.704157 | orchestrator | 2025-05-19 14:24:30.704168 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 14:24:30.704179 | orchestrator | Monday 19 May 2025 14:24:30 +0000 (0:00:00.195) 0:06:34.186 ************ 2025-05-19 14:24:30.704624 | orchestrator | 2025-05-19 14:24:30.705043 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-19 14:24:30.705580 | orchestrator | Monday 19 May 2025 14:24:30 +0000 (0:00:00.037) 0:06:34.224 ************ 2025-05-19 14:24:31.879985 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:24:31.880094 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:24:31.880376 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:24:31.881517 | orchestrator | 2025-05-19 14:24:31.882700 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-19 14:24:31.883120 | orchestrator | Monday 19 May 2025 14:24:31 +0000 (0:00:01.193) 0:06:35.418 ************ 2025-05-19 14:24:33.267408 | orchestrator | changed: [testbed-manager] 2025-05-19 14:24:33.267766 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:33.268770 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:33.268862 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:33.269767 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:33.271066 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:33.271587 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:33.272003 | orchestrator | 2025-05-19 14:24:33.272682 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-19 14:24:33.273212 | orchestrator | Monday 19 May 2025 14:24:33 +0000 (0:00:01.388) 0:06:36.806 ************ 2025-05-19 14:24:34.368791 | orchestrator | changed: [testbed-manager] 2025-05-19 
14:24:34.369707 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:34.371792 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:34.372973 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:34.373603 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:34.374764 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:34.376101 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:34.377033 | orchestrator | 2025-05-19 14:24:34.377948 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-19 14:24:34.380274 | orchestrator | Monday 19 May 2025 14:24:34 +0000 (0:00:01.101) 0:06:37.907 ************ 2025-05-19 14:24:34.501716 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:36.564652 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:36.564762 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:36.564910 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:36.565734 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:36.566733 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:36.567737 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:36.568009 | orchestrator | 2025-05-19 14:24:36.568845 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-19 14:24:36.569435 | orchestrator | Monday 19 May 2025 14:24:36 +0000 (0:00:02.194) 0:06:40.102 ************ 2025-05-19 14:24:36.680129 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:24:36.681131 | orchestrator | 2025-05-19 14:24:36.681944 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-19 14:24:36.682801 | orchestrator | Monday 19 May 2025 14:24:36 +0000 (0:00:00.120) 0:06:40.222 ************ 2025-05-19 14:24:37.878611 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:37.879232 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:37.880364 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:37.881364 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:37.882195 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:37.882778 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:37.883878 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:37.884311 | orchestrator | 2025-05-19 14:24:37.885211 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-19 14:24:37.885812 | orchestrator | Monday 19 May 2025 14:24:37 +0000 (0:00:01.195) 0:06:41.418 ************ 2025-05-19 14:24:38.021612 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:38.087457 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:24:38.149856 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:24:38.219658 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:24:38.281691 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:24:38.402771 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:24:38.402926 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:24:38.403780 | orchestrator | 2025-05-19 14:24:38.404561 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-19 14:24:38.405159 | orchestrator | Monday 19 May 2025 14:24:38 +0000 (0:00:00.524) 0:06:41.942 ************ 2025-05-19 14:24:39.283851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for 
testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:24:39.284265 | orchestrator | 2025-05-19 14:24:39.284836 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-19 14:24:39.286420 | orchestrator | Monday 19 May 2025 14:24:39 +0000 (0:00:00.882) 0:06:42.825 ************ 2025-05-19 14:24:39.745045 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:40.155042 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:24:40.155143 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:24:40.159222 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:24:40.163545 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:24:40.164150 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:24:40.168111 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:24:40.168706 | orchestrator | 2025-05-19 14:24:40.169408 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-19 14:24:40.169874 | orchestrator | Monday 19 May 2025 14:24:40 +0000 (0:00:00.870) 0:06:43.695 ************ 2025-05-19 14:24:42.841705 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-19 14:24:42.841958 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-19 14:24:42.846286 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-19 14:24:42.847459 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-19 14:24:42.848339 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-19 14:24:42.849608 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-19 14:24:42.850319 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-19 14:24:42.853004 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-05-19 14:24:42.856368 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-19 14:24:42.856962 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-19 14:24:42.857642 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-19 14:24:42.858117 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-19 14:24:42.858587 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-19 14:24:42.859098 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-19 14:24:42.859566 | orchestrator | 2025-05-19 14:24:42.862508 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-19 14:24:42.862781 | orchestrator | Monday 19 May 2025 14:24:42 +0000 (0:00:02.684) 0:06:46.380 ************ 2025-05-19 14:24:42.975276 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:43.038402 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:24:43.107943 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:24:43.173362 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:24:43.253838 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:24:43.352900 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:24:43.353151 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:24:43.354121 | orchestrator | 2025-05-19 14:24:43.358151 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-19 14:24:43.358687 | orchestrator | Monday 19 May 2025 14:24:43 +0000 (0:00:00.512) 0:06:46.893 ************ 
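
The "Copy docker fact files" task a few records above drops executable fact scripts into Ansible's local-facts directory, so later plays can read the container and image state back as ansible_local values instead of querying Docker again. A minimal sketch of that pattern follows, assuming the standard /etc/ansible/facts.d location; the template file names are illustrative, not the role's actual layout:

    # Sketch of the local-facts pattern; src names are hypothetical, only the
    # loop items (docker_containers, docker_images) come from the log above.
    - name: Copy docker fact files
      ansible.builtin.template:
        src: "{{ item }}.fact.j2"                    # hypothetical template name
        dest: "/etc/ansible/facts.d/{{ item }}.fact"
        mode: "0755"                                 # must be executable to run as a dynamic fact
      loop:
        - docker_containers
        - docker_images
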
2025-05-19 14:24:44.152937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:24:44.153302 | orchestrator | 2025-05-19 14:24:44.157408 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-19 14:24:44.157462 | orchestrator | Monday 19 May 2025 14:24:44 +0000 (0:00:00.798) 0:06:47.691 ************ 2025-05-19 14:24:44.566165 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:45.226166 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:24:45.227867 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:24:45.229081 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:24:45.230131 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:24:45.230915 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:24:45.231669 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:24:45.235140 | orchestrator | 2025-05-19 14:24:45.235172 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-19 14:24:45.235186 | orchestrator | Monday 19 May 2025 14:24:45 +0000 (0:00:01.075) 0:06:48.767 ************ 2025-05-19 14:24:45.604231 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:46.072065 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:24:46.072212 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:24:46.073475 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:24:46.074256 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:24:46.074963 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:24:46.077063 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:24:46.078316 | orchestrator | 2025-05-19 14:24:46.079156 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-19 14:24:46.080065 | orchestrator | Monday 19 May 2025 14:24:46 +0000 (0:00:00.844) 0:06:49.611 ************ 2025-05-19 14:24:46.217707 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:46.287463 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:24:46.346821 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:24:46.415272 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:24:46.476765 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:24:46.587156 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:24:46.587254 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:24:46.588150 | orchestrator | 2025-05-19 14:24:46.589122 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-19 14:24:46.590350 | orchestrator | Monday 19 May 2025 14:24:46 +0000 (0:00:00.516) 0:06:50.127 ************ 2025-05-19 14:24:47.980333 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:47.980542 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:24:47.981565 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:24:47.983143 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:24:47.983654 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:24:47.984382 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:24:47.984966 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:24:47.985556 | orchestrator | 2025-05-19 14:24:47.986119 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-19 14:24:47.986949 | orchestrator | Monday 19 May 2025 14:24:47 +0000 
(0:00:01.393) 0:06:51.521 ************ 2025-05-19 14:24:48.114517 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:24:48.184831 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:24:48.246861 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:24:48.309188 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:24:48.374746 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:24:48.648976 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:24:48.649261 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:24:48.649644 | orchestrator | 2025-05-19 14:24:48.650527 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-19 14:24:48.651047 | orchestrator | Monday 19 May 2025 14:24:48 +0000 (0:00:00.668) 0:06:52.189 ************ 2025-05-19 14:24:55.971919 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:55.972735 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:55.973548 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:55.975056 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:55.976496 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:55.976888 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:55.978117 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:55.979088 | orchestrator | 2025-05-19 14:24:55.980438 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-19 14:24:55.981436 | orchestrator | Monday 19 May 2025 14:24:55 +0000 (0:00:07.322) 0:06:59.512 ************ 2025-05-19 14:24:57.306252 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:57.306813 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:57.306969 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:57.307727 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:57.311388 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:57.311412 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:57.312320 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:57.313182 | orchestrator | 2025-05-19 14:24:57.313830 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-19 14:24:57.314718 | orchestrator | Monday 19 May 2025 14:24:57 +0000 (0:00:01.326) 0:07:00.839 ************ 2025-05-19 14:24:59.024129 | orchestrator | ok: [testbed-manager] 2025-05-19 14:24:59.025473 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:24:59.027704 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:24:59.028509 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:24:59.033249 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:24:59.033284 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:24:59.033296 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:24:59.033309 | orchestrator | 2025-05-19 14:24:59.033911 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-19 14:24:59.034844 | orchestrator | Monday 19 May 2025 14:24:59 +0000 (0:00:01.723) 0:07:02.563 ************ 2025-05-19 14:25:00.806767 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:00.807642 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:25:00.808890 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:25:00.810388 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:25:00.810970 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:25:00.811621 | orchestrator | changed: [testbed-node-1] 
2025-05-19 14:25:00.812653 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:25:00.813946 | orchestrator | 2025-05-19 14:25:00.814985 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-19 14:25:00.815553 | orchestrator | Monday 19 May 2025 14:25:00 +0000 (0:00:01.783) 0:07:04.346 ************ 2025-05-19 14:25:01.241366 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:01.675291 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:01.675806 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:01.677065 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:01.677714 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:01.678585 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:01.679497 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:01.680575 | orchestrator | 2025-05-19 14:25:01.680847 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-19 14:25:01.680902 | orchestrator | Monday 19 May 2025 14:25:01 +0000 (0:00:00.870) 0:07:05.217 ************ 2025-05-19 14:25:01.801094 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:25:01.869312 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:25:01.933263 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:25:01.996291 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:25:02.062223 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:25:02.431136 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:25:02.432969 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:25:02.433793 | orchestrator | 2025-05-19 14:25:02.434813 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-19 14:25:02.436075 | orchestrator | Monday 19 May 2025 14:25:02 +0000 (0:00:00.750) 0:07:05.968 ************ 2025-05-19 14:25:02.565822 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:25:02.626313 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:25:02.700090 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:25:02.761836 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:25:02.824006 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:25:02.919994 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:25:02.920897 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:25:02.922135 | orchestrator | 2025-05-19 14:25:02.923124 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-19 14:25:02.924487 | orchestrator | Monday 19 May 2025 14:25:02 +0000 (0:00:00.493) 0:07:06.461 ************ 2025-05-19 14:25:03.043526 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:03.275079 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:03.337044 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:03.398562 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:03.467586 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:03.569454 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:03.570206 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:03.570347 | orchestrator | 2025-05-19 14:25:03.571031 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-19 14:25:03.571439 | orchestrator | Monday 19 May 2025 14:25:03 +0000 (0:00:00.648) 0:07:07.109 ************ 2025-05-19 14:25:03.700122 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:03.761830 | orchestrator | ok: [testbed-node-3] 2025-05-19 
14:25:03.824059 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:03.891371 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:03.953468 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:04.059263 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:04.060527 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:04.062134 | orchestrator | 2025-05-19 14:25:04.065217 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-19 14:25:04.066361 | orchestrator | Monday 19 May 2025 14:25:04 +0000 (0:00:00.489) 0:07:07.599 ************ 2025-05-19 14:25:04.197465 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:04.260359 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:04.332650 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:04.396907 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:04.458933 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:04.567689 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:04.568946 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:04.570084 | orchestrator | 2025-05-19 14:25:04.571609 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-19 14:25:04.572761 | orchestrator | Monday 19 May 2025 14:25:04 +0000 (0:00:00.511) 0:07:08.110 ************ 2025-05-19 14:25:10.240341 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:10.240938 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:10.241872 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:10.242785 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:10.243642 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:10.244509 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:10.245349 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:10.245988 | orchestrator | 2025-05-19 14:25:10.246715 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-19 14:25:10.247363 | orchestrator | Monday 19 May 2025 14:25:10 +0000 (0:00:05.669) 0:07:13.780 ************ 2025-05-19 14:25:10.444112 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:25:10.506689 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:25:10.581405 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:25:10.865799 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:25:10.984159 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:25:10.984621 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:25:10.985622 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:25:10.986924 | orchestrator | 2025-05-19 14:25:10.990289 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-19 14:25:10.990317 | orchestrator | Monday 19 May 2025 14:25:10 +0000 (0:00:00.744) 0:07:14.525 ************ 2025-05-19 14:25:11.768141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:25:11.768371 | orchestrator | 2025-05-19 14:25:11.770191 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-19 14:25:11.770881 | orchestrator | Monday 19 May 2025 14:25:11 +0000 (0:00:00.783) 0:07:15.308 ************ 2025-05-19 14:25:13.506806 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:13.507262 | orchestrator | ok: 
[testbed-node-3] 2025-05-19 14:25:13.508369 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:13.512082 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:13.512445 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:13.513458 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:13.514577 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:13.515364 | orchestrator | 2025-05-19 14:25:13.520919 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-19 14:25:13.521162 | orchestrator | Monday 19 May 2025 14:25:13 +0000 (0:00:01.737) 0:07:17.046 ************ 2025-05-19 14:25:14.641623 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:14.642237 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:14.643392 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:14.644199 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:14.645735 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:14.645757 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:14.646724 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:14.647237 | orchestrator | 2025-05-19 14:25:14.647927 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-19 14:25:14.648744 | orchestrator | Monday 19 May 2025 14:25:14 +0000 (0:00:01.136) 0:07:18.182 ************ 2025-05-19 14:25:15.133732 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:15.217981 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:15.294929 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:15.746431 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:15.747135 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:15.748380 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:15.749090 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:15.750297 | orchestrator | 2025-05-19 14:25:15.752337 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-19 14:25:15.752885 | orchestrator | Monday 19 May 2025 14:25:15 +0000 (0:00:01.101) 0:07:19.284 ************ 2025-05-19 14:25:17.437193 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 14:25:17.438413 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 14:25:17.439464 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 14:25:17.440279 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 14:25:17.441582 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 14:25:17.442807 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 14:25:17.444046 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 14:25:17.444749 | orchestrator | 2025-05-19 14:25:17.446506 | orchestrator | TASK [osism.services.lldpd : Include distribution specific 
install tasks] ****** 2025-05-19 14:25:17.447037 | orchestrator | Monday 19 May 2025 14:25:17 +0000 (0:00:01.689) 0:07:20.974 ************ 2025-05-19 14:25:18.209822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:25:18.211244 | orchestrator | 2025-05-19 14:25:18.212003 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-19 14:25:18.213134 | orchestrator | Monday 19 May 2025 14:25:18 +0000 (0:00:00.773) 0:07:21.748 ************ 2025-05-19 14:25:27.049635 | orchestrator | changed: [testbed-manager] 2025-05-19 14:25:27.049785 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:25:27.051300 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:25:27.051331 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:25:27.052031 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:25:27.052399 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:25:27.053320 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:25:27.055648 | orchestrator | 2025-05-19 14:25:27.056200 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-19 14:25:27.057192 | orchestrator | Monday 19 May 2025 14:25:27 +0000 (0:00:08.841) 0:07:30.589 ************ 2025-05-19 14:25:28.743101 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:28.746720 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:28.746801 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:28.746816 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:28.747231 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:28.748323 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:28.749175 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:28.749754 | orchestrator | 2025-05-19 14:25:28.750429 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-19 14:25:28.751649 | orchestrator | Monday 19 May 2025 14:25:28 +0000 (0:00:01.691) 0:07:32.281 ************ 2025-05-19 14:25:30.194744 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:30.195527 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:30.199331 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:30.199372 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:30.199384 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:30.200704 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:30.202336 | orchestrator | 2025-05-19 14:25:30.202795 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-19 14:25:30.203736 | orchestrator | Monday 19 May 2025 14:25:30 +0000 (0:00:01.453) 0:07:33.734 ************ 2025-05-19 14:25:31.435169 | orchestrator | changed: [testbed-manager] 2025-05-19 14:25:31.435975 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:25:31.438280 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:25:31.441326 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:25:31.445094 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:25:31.445131 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:25:31.445143 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:25:31.445154 | orchestrator | 2025-05-19 14:25:31.445890 | orchestrator | PLAY [Apply bootstrap role part 2] 
********************************************* 2025-05-19 14:25:31.446881 | orchestrator | 2025-05-19 14:25:31.447474 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-19 14:25:31.448294 | orchestrator | Monday 19 May 2025 14:25:31 +0000 (0:00:01.240) 0:07:34.974 ************ 2025-05-19 14:25:31.555644 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:25:31.615069 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:25:31.671034 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:25:31.738454 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:25:31.800541 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:25:31.922309 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:25:31.923201 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:25:31.924732 | orchestrator | 2025-05-19 14:25:31.926296 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-19 14:25:31.927349 | orchestrator | 2025-05-19 14:25:31.927675 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-19 14:25:31.928497 | orchestrator | Monday 19 May 2025 14:25:31 +0000 (0:00:00.488) 0:07:35.463 ************ 2025-05-19 14:25:33.240184 | orchestrator | changed: [testbed-manager] 2025-05-19 14:25:33.240413 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:25:33.241354 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:25:33.242109 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:25:33.242715 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:25:33.243613 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:25:33.244186 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:25:33.244558 | orchestrator | 2025-05-19 14:25:33.245238 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-19 14:25:33.246186 | orchestrator | Monday 19 May 2025 14:25:33 +0000 (0:00:01.315) 0:07:36.778 ************ 2025-05-19 14:25:34.839904 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:34.840189 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:34.841176 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:34.844109 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:34.844149 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:34.844544 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:34.845459 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:34.846223 | orchestrator | 2025-05-19 14:25:34.847199 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-19 14:25:34.847915 | orchestrator | Monday 19 May 2025 14:25:34 +0000 (0:00:01.600) 0:07:38.378 ************ 2025-05-19 14:25:34.981977 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:25:35.044935 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:25:35.112919 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:25:35.173348 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:25:35.234997 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:25:35.609566 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:25:35.613280 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:25:35.615380 | orchestrator | 2025-05-19 14:25:35.616395 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-19 14:25:35.617471 | orchestrator | Monday 19 May 2025 14:25:35 +0000 
(0:00:00.770) 0:07:39.149 ************ 2025-05-19 14:25:36.835276 | orchestrator | changed: [testbed-manager] 2025-05-19 14:25:36.837304 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:25:36.837330 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:25:36.838924 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:25:36.839646 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:25:36.840297 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:25:36.841621 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:25:36.841734 | orchestrator | 2025-05-19 14:25:36.842430 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-19 14:25:36.843036 | orchestrator | 2025-05-19 14:25:36.843748 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-19 14:25:36.844352 | orchestrator | Monday 19 May 2025 14:25:36 +0000 (0:00:01.222) 0:07:40.371 ************ 2025-05-19 14:25:37.754301 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:25:37.754523 | orchestrator | 2025-05-19 14:25:37.755339 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-19 14:25:37.758611 | orchestrator | Monday 19 May 2025 14:25:37 +0000 (0:00:00.921) 0:07:41.293 ************ 2025-05-19 14:25:38.160873 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:38.586934 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:38.587042 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:38.587927 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:38.588008 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:38.588967 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:38.589811 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:38.590737 | orchestrator | 2025-05-19 14:25:38.590962 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-19 14:25:38.591525 | orchestrator | Monday 19 May 2025 14:25:38 +0000 (0:00:00.833) 0:07:42.126 ************ 2025-05-19 14:25:39.695032 | orchestrator | changed: [testbed-manager] 2025-05-19 14:25:39.695504 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:25:39.697279 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:25:39.700054 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:25:39.700091 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:25:39.700103 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:25:39.700422 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:25:39.701526 | orchestrator | 2025-05-19 14:25:39.702195 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-19 14:25:39.703128 | orchestrator | Monday 19 May 2025 14:25:39 +0000 (0:00:01.106) 0:07:43.233 ************ 2025-05-19 14:25:40.674695 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:25:40.675620 | orchestrator | 2025-05-19 14:25:40.679742 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-19 14:25:40.679791 | orchestrator | Monday 19 May 2025 14:25:40 +0000 (0:00:00.979) 0:07:44.213 ************ 2025-05-19 14:25:41.079580 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:41.522978 | 
orchestrator | ok: [testbed-node-3]
2025-05-19 14:25:41.523374 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:25:41.524178 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:25:41.525176 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:25:41.525865 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:25:41.526461 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:25:41.527255 | orchestrator |
2025-05-19 14:25:41.527881 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-19 14:25:41.528524 | orchestrator | Monday 19 May 2025 14:25:41 +0000 (0:00:00.848) 0:07:45.061 ************
2025-05-19 14:25:41.924687 | orchestrator | changed: [testbed-manager]
2025-05-19 14:25:42.597108 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:25:42.597508 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:25:42.598789 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:25:42.599932 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:25:42.600572 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:25:42.601567 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:25:42.601971 | orchestrator |
2025-05-19 14:25:42.602441 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:25:42.603013 | orchestrator | 2025-05-19 14:25:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 14:25:42.603190 | orchestrator | 2025-05-19 14:25:42 | INFO  | Please wait and do not abort execution.
2025-05-19 14:25:42.604351 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-05-19 14:25:42.604793 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-19 14:25:42.605241 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-19 14:25:42.605809 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-19 14:25:42.606645 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-19 14:25:42.606936 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-19 14:25:42.607404 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-19 14:25:42.608793 | orchestrator |
2025-05-19 14:25:42.609228 | orchestrator |
2025-05-19 14:25:42.609248 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:25:42.609279 | orchestrator | Monday 19 May 2025 14:25:42 +0000 (0:00:01.075) 0:07:46.137 ************
2025-05-19 14:25:42.609631 | orchestrator | ===============================================================================
2025-05-19 14:25:42.609961 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.31s
2025-05-19 14:25:42.610722 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.70s
2025-05-19 14:25:42.610967 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.38s
2025-05-19 14:25:42.611329 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.06s
2025-05-19 14:25:42.611789 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.71s
2025-05-19 14:25:42.612057 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.70s
2025-05-19 14:25:42.612401 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.31s
2025-05-19 14:25:42.613057 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.65s
2025-05-19 14:25:42.613369 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.97s
2025-05-19 14:25:42.613387 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.84s
2025-05-19 14:25:42.613829 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.19s
2025-05-19 14:25:42.614130 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.96s
2025-05-19 14:25:42.614918 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.93s
2025-05-19 14:25:42.614938 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.69s
2025-05-19 14:25:42.615293 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.32s
2025-05-19 14:25:42.615491 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.30s
2025-05-19 14:25:42.616552 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.02s
2025-05-19 14:25:42.617263 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.88s
2025-05-19 14:25:42.617853 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.67s
2025-05-19 14:25:42.618148 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.67s
2025-05-19 14:25:43.268993 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-19 14:25:43.269112 | orchestrator | + osism apply network
2025-05-19 14:25:45.207159 | orchestrator | 2025-05-19 14:25:45 | INFO  | Task 6afde990-014b-49aa-bdec-40469c9cbe51 (network) was prepared for execution.
2025-05-19 14:25:45.207264 | orchestrator | 2025-05-19 14:25:45 | INFO  | It takes a moment until task 6afde990-014b-49aa-bdec-40469c9cbe51 (network) has been started and output is visible here.
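
The "osism apply network" call above hands the network playbook to the OSISM task engine; the play that follows applies osism.commons.network, which renders a netplan file (the 01-osism.yaml kept by the cleanup step at the end of this run). As a rough sketch only, with invented interface names and addresses rather than the testbed's actual layout, such a rendered file could look like:

    # Illustrative sketch of a rendered /etc/netplan/01-osism.yaml; interface
    # names, addresses, and MTU are assumptions, not values from this job.
    network:
      version: 2
      ethernets:
        ens3:
          dhcp4: true            # uplink keeps its DHCP-assigned address
        ens4:
          dhcp4: false
          addresses:
            - 192.168.16.10/20   # example management address
          mtu: 1450
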
2025-05-19 14:25:49.442423 | orchestrator | 2025-05-19 14:25:49.445701 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-19 14:25:49.447158 | orchestrator | 2025-05-19 14:25:49.448143 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-19 14:25:49.449096 | orchestrator | Monday 19 May 2025 14:25:49 +0000 (0:00:00.279) 0:00:00.279 ************ 2025-05-19 14:25:49.589065 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:49.669950 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:49.745800 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:49.820382 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:50.000137 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:50.131620 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:50.132244 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:50.133365 | orchestrator | 2025-05-19 14:25:50.136732 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-19 14:25:50.136759 | orchestrator | Monday 19 May 2025 14:25:50 +0000 (0:00:00.689) 0:00:00.968 ************ 2025-05-19 14:25:51.297364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:25:51.297591 | orchestrator | 2025-05-19 14:25:51.300895 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-19 14:25:51.300978 | orchestrator | Monday 19 May 2025 14:25:51 +0000 (0:00:01.164) 0:00:02.133 ************ 2025-05-19 14:25:53.127585 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:53.130463 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:53.130888 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:53.131441 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:53.132403 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:53.133846 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:53.134684 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:53.135328 | orchestrator | 2025-05-19 14:25:53.135958 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-19 14:25:53.136607 | orchestrator | Monday 19 May 2025 14:25:53 +0000 (0:00:01.832) 0:00:03.965 ************ 2025-05-19 14:25:54.862157 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:25:54.862352 | orchestrator | ok: [testbed-manager] 2025-05-19 14:25:54.863582 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:25:54.867051 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:25:54.867596 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:25:54.868787 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:25:54.869458 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:25:54.870329 | orchestrator | 2025-05-19 14:25:54.870944 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-19 14:25:54.872197 | orchestrator | Monday 19 May 2025 14:25:54 +0000 (0:00:01.730) 0:00:05.696 ************ 2025-05-19 14:25:55.369906 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-19 14:25:55.370313 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-19 14:25:55.811870 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-19 14:25:55.812651 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-19 14:25:55.813225 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-19 14:25:55.814542 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-19 14:25:55.815027 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-19 14:25:55.816001 | orchestrator | 2025-05-19 14:25:55.817005 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-19 14:25:55.817446 | orchestrator | Monday 19 May 2025 14:25:55 +0000 (0:00:00.955) 0:00:06.651 ************ 2025-05-19 14:25:59.124102 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-19 14:25:59.124516 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-19 14:25:59.125302 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 14:25:59.127107 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:25:59.128103 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-19 14:25:59.128727 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-19 14:25:59.129287 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 14:25:59.130077 | orchestrator | 2025-05-19 14:25:59.130452 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-19 14:25:59.131540 | orchestrator | Monday 19 May 2025 14:25:59 +0000 (0:00:03.309) 0:00:09.961 ************ 2025-05-19 14:26:00.717308 | orchestrator | changed: [testbed-manager] 2025-05-19 14:26:00.717489 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:26:00.718437 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:26:00.718957 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:26:00.720296 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:26:00.720887 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:26:00.721485 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:26:00.722098 | orchestrator | 2025-05-19 14:26:00.723141 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-19 14:26:00.723411 | orchestrator | Monday 19 May 2025 14:26:00 +0000 (0:00:01.593) 0:00:11.554 ************ 2025-05-19 14:26:02.557273 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 14:26:02.557401 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:26:02.558071 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-19 14:26:02.559000 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-19 14:26:02.559939 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-19 14:26:02.560726 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 14:26:02.562158 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-19 14:26:02.563758 | orchestrator | 2025-05-19 14:26:02.564676 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-19 14:26:02.565629 | orchestrator | Monday 19 May 2025 14:26:02 +0000 (0:00:01.840) 0:00:13.395 ************ 2025-05-19 14:26:02.967010 | orchestrator | ok: [testbed-manager] 2025-05-19 14:26:03.243997 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:26:03.694581 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:26:03.694751 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:26:03.697125 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:26:03.698319 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:26:03.699703 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:26:03.701719 | orchestrator | 2025-05-19 
14:26:03.702135 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-19 14:26:03.703118 | orchestrator | Monday 19 May 2025 14:26:03 +0000 (0:00:01.132) 0:00:14.528 ************ 2025-05-19 14:26:03.858359 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:26:03.940278 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:26:04.024174 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:26:04.106259 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:26:04.180258 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:26:04.327026 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:26:04.327901 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:26:04.329573 | orchestrator | 2025-05-19 14:26:04.331568 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-19 14:26:04.332518 | orchestrator | Monday 19 May 2025 14:26:04 +0000 (0:00:00.638) 0:00:15.166 ************ 2025-05-19 14:26:06.385138 | orchestrator | ok: [testbed-manager] 2025-05-19 14:26:06.385711 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:26:06.385839 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:26:06.389696 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:26:06.390664 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:26:06.391041 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:26:06.391742 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:26:06.392584 | orchestrator | 2025-05-19 14:26:06.393311 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-19 14:26:06.394058 | orchestrator | Monday 19 May 2025 14:26:06 +0000 (0:00:02.053) 0:00:17.220 ************ 2025-05-19 14:26:06.634609 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:26:06.716991 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:26:06.798257 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:26:06.880246 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:26:07.254493 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:26:07.256312 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:26:07.256463 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-19 14:26:07.259734 | orchestrator | 2025-05-19 14:26:07.260170 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-19 14:26:07.261252 | orchestrator | Monday 19 May 2025 14:26:07 +0000 (0:00:00.873) 0:00:18.094 ************ 2025-05-19 14:26:08.912878 | orchestrator | ok: [testbed-manager] 2025-05-19 14:26:08.914014 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:26:08.916716 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:26:08.916763 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:26:08.918057 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:26:08.919710 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:26:08.920751 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:26:08.922397 | orchestrator | 2025-05-19 14:26:08.923133 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-19 14:26:08.923818 | orchestrator | Monday 19 May 2025 14:26:08 +0000 (0:00:01.653) 0:00:19.747 ************ 2025-05-19 14:26:10.132747 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:26:10.133616 | orchestrator | 2025-05-19 14:26:10.136876 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-19 14:26:10.136901 | orchestrator | Monday 19 May 2025 14:26:10 +0000 (0:00:01.220) 0:00:20.968 ************ 2025-05-19 14:26:10.853101 | orchestrator | ok: [testbed-manager] 2025-05-19 14:26:11.291143 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:26:11.294733 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:26:11.294768 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:26:11.294798 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:26:11.294855 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:26:11.296357 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:26:11.296825 | orchestrator | 2025-05-19 14:26:11.297663 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-19 14:26:11.298611 | orchestrator | Monday 19 May 2025 14:26:11 +0000 (0:00:01.157) 0:00:22.126 ************ 2025-05-19 14:26:11.455071 | orchestrator | ok: [testbed-manager] 2025-05-19 14:26:11.541073 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:26:11.627193 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:26:11.710514 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:26:11.792397 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:26:11.936403 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:26:11.940646 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:26:11.941557 | orchestrator | 2025-05-19 14:26:11.942561 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-19 14:26:11.943400 | orchestrator | Monday 19 May 2025 14:26:11 +0000 (0:00:00.649) 0:00:22.775 ************ 2025-05-19 14:26:12.274992 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 14:26:12.275545 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 14:26:12.585223 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 14:26:12.585361 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 14:26:12.585667 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 14:26:12.586444 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 14:26:12.689373 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 14:26:12.689828 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 14:26:12.690940 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 14:26:12.691987 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 14:26:13.138349 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 14:26:13.138475 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 14:26:13.139551 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 14:26:13.140376 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 
14:26:13.141516 | orchestrator | 2025-05-19 14:26:13.142859 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-19 14:26:13.143625 | orchestrator | Monday 19 May 2025 14:26:13 +0000 (0:00:01.197) 0:00:23.973 ************ 2025-05-19 14:26:13.305740 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:26:13.391034 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:26:13.467260 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:26:13.546120 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:26:13.625805 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:26:13.752127 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:26:13.752286 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:26:13.754193 | orchestrator | 2025-05-19 14:26:13.754914 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-05-19 14:26:13.756143 | orchestrator | Monday 19 May 2025 14:26:13 +0000 (0:00:00.618) 0:00:24.591 ************ 2025-05-19 14:26:17.231130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2025-05-19 14:26:17.231244 | orchestrator | 2025-05-19 14:26:17.231325 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-05-19 14:26:17.232189 | orchestrator | Monday 19 May 2025 14:26:17 +0000 (0:00:03.474) 0:00:28.066 ************ 2025-05-19 14:26:21.966301 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:21.966415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:21.966432 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:21.967322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:21.967502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:21.967952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:21.969105 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:21.969127 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:21.969522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:21.970548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:21.970570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:21.970905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:21.971383 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:21.971934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:21.972438 | orchestrator | 2025-05-19 14:26:21.972729 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-05-19 14:26:21.973200 | orchestrator | Monday 19 May 2025 14:26:21 +0000 (0:00:04.733) 0:00:32.800 ************ 2025-05-19 14:26:26.584276 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:26.585225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:26.585300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:26.586695 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:26.588947 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:26.589191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:26.590893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:26.591696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:26.592269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-19 14:26:26.593307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:26.593980 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:26.594388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:26.595096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:26.595480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-19 14:26:26.595847 | orchestrator | 2025-05-19 14:26:26.596675 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-05-19 14:26:26.597403 | orchestrator | Monday 19 May 2025 14:26:26 +0000 (0:00:04.621) 0:00:37.421 
************ 2025-05-19 14:26:27.734916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:26:27.735424 | orchestrator | 2025-05-19 14:26:27.736045 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-19 14:26:27.737181 | orchestrator | Monday 19 May 2025 14:26:27 +0000 (0:00:01.147) 0:00:38.569 ************ 2025-05-19 14:26:28.187752 | orchestrator | ok: [testbed-manager] 2025-05-19 14:26:28.270747 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:26:28.707123 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:26:28.707234 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:26:28.715911 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:26:28.719784 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:26:28.719923 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:26:28.720384 | orchestrator | 2025-05-19 14:26:28.720876 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-19 14:26:28.722415 | orchestrator | Monday 19 May 2025 14:26:28 +0000 (0:00:00.975) 0:00:39.545 ************ 2025-05-19 14:26:28.800886 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 14:26:28.801492 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 14:26:28.802298 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 14:26:28.803104 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 14:26:28.892396 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:26:28.893521 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 14:26:28.894165 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 14:26:28.895032 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 14:26:28.895892 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 14:26:28.984038 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:26:28.984576 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 14:26:28.985522 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 14:26:28.986362 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 14:26:28.987926 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 14:26:29.261106 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:26:29.262456 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 14:26:29.263443 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 14:26:29.264507 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 14:26:29.265899 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 14:26:29.360154 | orchestrator | skipping: [testbed-node-2] 2025-05-19 
14:26:29.360303 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 14:26:29.361537 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 14:26:29.362314 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 14:26:29.460473 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 14:26:29.460793 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 14:26:29.463642 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 14:26:29.463667 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 14:26:29.463679 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 14:26:30.671874 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:26:30.672495 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:26:30.673069 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 14:26:30.674482 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 14:26:30.675593 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 14:26:30.676473 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 14:26:30.677009 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:26:30.677741 | orchestrator | 2025-05-19 14:26:30.678000 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-05-19 14:26:30.678873 | orchestrator | Monday 19 May 2025 14:26:30 +0000 (0:00:01.963) 0:00:41.508 ************ 2025-05-19 14:26:30.836524 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:26:30.915869 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:26:30.997601 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:26:31.077182 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:26:31.163081 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:26:31.282476 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:26:31.283509 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:26:31.287751 | orchestrator | 2025-05-19 14:26:31.287863 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-19 14:26:31.288575 | orchestrator | Monday 19 May 2025 14:26:31 +0000 (0:00:00.613) 0:00:42.122 ************ 2025-05-19 14:26:31.619521 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:26:31.706359 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:26:31.789295 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:26:31.869388 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:26:31.956244 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:26:31.987434 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:26:31.987785 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:26:31.989001 | orchestrator | 2025-05-19 14:26:31.991075 | orchestrator | 2025-05-19 14:26:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:26:31.991110 | orchestrator | 2025-05-19 14:26:31 | INFO  | Please wait and do not abort execution. 
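Note: the two VXLAN overlays configured above (vni 42 and vni 23, both MTU 1350) are written as systemd-networkd .netdev/.network unit pairs. A minimal manual equivalent for testbed-manager using iproute2, illustrative only: the UDP port 4789 is an assumption, and the unicast FDB entries stand in for the 'dests' lists shown in the task output.

    # Sketch of what the generated 30-vxlan0.netdev/30-vxlan0.network units
    # amount to on testbed-manager (addresses and VNI from the task output).
    ip link add vxlan0 type vxlan id 42 local 192.168.16.5 dstport 4789 nolearning
    ip link set vxlan0 mtu 1350 up
    ip addr add 192.168.112.5/20 dev vxlan0
    # One unicast FDB entry per remote VTEP in the 'dests' list:
    for dest in 192.168.16.10 192.168.16.11 192.168.16.12 192.168.16.13 192.168.16.14 192.168.16.15; do
        bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst "$dest"
    done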
2025-05-19 14:26:31.991214 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:26:31.991580 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 14:26:31.992582 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 14:26:31.993181 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 14:26:31.994153 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 14:26:31.995546 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 14:26:31.995887 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 14:26:31.996957 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 14:26:31.997165 | orchestrator | 2025-05-19 14:26:31.997828 | orchestrator | 2025-05-19 14:26:31.998328 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:26:31.999286 | orchestrator | Monday 19 May 2025 14:26:31 +0000 (0:00:00.704) 0:00:42.827 ************ 2025-05-19 14:26:32.000294 | orchestrator | =============================================================================== 2025-05-19 14:26:32.000568 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.73s 2025-05-19 14:26:32.002134 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.62s 2025-05-19 14:26:32.002301 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.47s 2025-05-19 14:26:32.002714 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.31s 2025-05-19 14:26:32.003171 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.05s 2025-05-19 14:26:32.003606 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.96s 2025-05-19 14:26:32.004094 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.84s 2025-05-19 14:26:32.004672 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.83s 2025-05-19 14:26:32.005077 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.73s 2025-05-19 14:26:32.006442 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.65s 2025-05-19 14:26:32.006782 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s 2025-05-19 14:26:32.007058 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s 2025-05-19 14:26:32.007628 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.20s 2025-05-19 14:26:32.007902 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.16s 2025-05-19 14:26:32.008847 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2025-05-19 14:26:32.009236 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.15s 2025-05-19 14:26:32.009802 | orchestrator | 
osism.commons.network : Check if path for interface file exists --------- 1.13s 2025-05-19 14:26:32.010371 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s 2025-05-19 14:26:32.010886 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s 2025-05-19 14:26:32.011401 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.87s 2025-05-19 14:26:32.555256 | orchestrator | + osism apply wireguard 2025-05-19 14:26:34.268875 | orchestrator | 2025-05-19 14:26:34 | INFO  | Task ffc34cc0-a8fc-4ece-a7cd-596514b11b22 (wireguard) was prepared for execution. 2025-05-19 14:26:34.268994 | orchestrator | 2025-05-19 14:26:34 | INFO  | It takes a moment until task ffc34cc0-a8fc-4ece-a7cd-596514b11b22 (wireguard) has been started and output is visible here. 2025-05-19 14:26:38.339042 | orchestrator | 2025-05-19 14:26:38.340566 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-19 14:26:38.342674 | orchestrator | 2025-05-19 14:26:38.343605 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-19 14:26:38.344332 | orchestrator | Monday 19 May 2025 14:26:38 +0000 (0:00:00.225) 0:00:00.225 ************ 2025-05-19 14:26:39.776829 | orchestrator | ok: [testbed-manager] 2025-05-19 14:26:39.777046 | orchestrator | 2025-05-19 14:26:39.778344 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-19 14:26:39.779182 | orchestrator | Monday 19 May 2025 14:26:39 +0000 (0:00:01.437) 0:00:01.663 ************ 2025-05-19 14:26:45.983872 | orchestrator | changed: [testbed-manager] 2025-05-19 14:26:45.984890 | orchestrator | 2025-05-19 14:26:45.986409 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-19 14:26:45.987553 | orchestrator | Monday 19 May 2025 14:26:45 +0000 (0:00:06.208) 0:00:07.871 ************ 2025-05-19 14:26:46.508633 | orchestrator | changed: [testbed-manager] 2025-05-19 14:26:46.509618 | orchestrator | 2025-05-19 14:26:46.510498 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-19 14:26:46.511238 | orchestrator | Monday 19 May 2025 14:26:46 +0000 (0:00:00.521) 0:00:08.392 ************ 2025-05-19 14:26:46.913494 | orchestrator | changed: [testbed-manager] 2025-05-19 14:26:46.913792 | orchestrator | 2025-05-19 14:26:46.914772 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-19 14:26:46.915833 | orchestrator | Monday 19 May 2025 14:26:46 +0000 (0:00:00.408) 0:00:08.801 ************ 2025-05-19 14:26:47.549432 | orchestrator | ok: [testbed-manager] 2025-05-19 14:26:47.549573 | orchestrator | 2025-05-19 14:26:47.550244 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-19 14:26:47.551505 | orchestrator | Monday 19 May 2025 14:26:47 +0000 (0:00:00.622) 0:00:09.424 ************ 2025-05-19 14:26:47.924808 | orchestrator | ok: [testbed-manager] 2025-05-19 14:26:47.925231 | orchestrator | 2025-05-19 14:26:47.925342 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-19 14:26:47.925421 | orchestrator | Monday 19 May 2025 14:26:47 +0000 (0:00:00.389) 0:00:09.813 ************ 2025-05-19 14:26:48.314684 | orchestrator | ok: [testbed-manager] 2025-05-19 14:26:48.314962 | 
orchestrator | 2025-05-19 14:26:48.315702 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-19 14:26:48.316616 | orchestrator | Monday 19 May 2025 14:26:48 +0000 (0:00:00.386) 0:00:10.200 ************ 2025-05-19 14:26:49.519969 | orchestrator | changed: [testbed-manager] 2025-05-19 14:26:49.520073 | orchestrator | 2025-05-19 14:26:49.520088 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-05-19 14:26:49.520102 | orchestrator | Monday 19 May 2025 14:26:49 +0000 (0:00:01.199) 0:00:11.399 ************ 2025-05-19 14:26:50.392519 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 14:26:50.394919 | orchestrator | changed: [testbed-manager] 2025-05-19 14:26:50.394953 | orchestrator | 2025-05-19 14:26:50.394967 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-19 14:26:50.394980 | orchestrator | Monday 19 May 2025 14:26:50 +0000 (0:00:00.876) 0:00:12.276 ************ 2025-05-19 14:26:52.094360 | orchestrator | changed: [testbed-manager] 2025-05-19 14:26:52.096068 | orchestrator | 2025-05-19 14:26:52.096117 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-19 14:26:52.096178 | orchestrator | Monday 19 May 2025 14:26:52 +0000 (0:00:01.705) 0:00:13.981 ************ 2025-05-19 14:26:52.933131 | orchestrator | changed: [testbed-manager] 2025-05-19 14:26:52.933246 | orchestrator | 2025-05-19 14:26:52.935029 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:26:52.935063 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:26:52.935209 | orchestrator | 2025-05-19 14:26:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:26:52.935267 | orchestrator | 2025-05-19 14:26:52 | INFO  | Please wait and do not abort execution. 
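For orientation, the key-material tasks above map onto plain wg(8) commands. A minimal sketch, assuming the role stores the keys under /etc/wireguard (the exact paths are not visible in this log):

    # Server key and PSK generation as performed by the role above;
    # the file locations are assumptions.
    umask 077
    wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
    wg genpsk > /etc/wireguard/preshared.key
    # Rendered into wg0.conf, then activated via the unit managed above:
    systemctl enable --now wg-quick@wg0.service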
2025-05-19 14:26:52.935339 | orchestrator | 2025-05-19 14:26:52.935856 | orchestrator | 2025-05-19 14:26:52.936366 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:26:52.937401 | orchestrator | Monday 19 May 2025 14:26:52 +0000 (0:00:00.840) 0:00:14.821 ************ 2025-05-19 14:26:52.937815 | orchestrator | =============================================================================== 2025-05-19 14:26:52.938110 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.21s 2025-05-19 14:26:52.938789 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.71s 2025-05-19 14:26:52.939675 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.44s 2025-05-19 14:26:52.939927 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s 2025-05-19 14:26:52.940230 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s 2025-05-19 14:26:52.941213 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.84s 2025-05-19 14:26:52.941386 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.62s 2025-05-19 14:26:52.941560 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.52s 2025-05-19 14:26:52.941918 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s 2025-05-19 14:26:52.942198 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s 2025-05-19 14:26:52.942447 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.39s 2025-05-19 14:26:53.502222 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-19 14:26:53.537033 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-19 14:26:53.537130 | orchestrator | Dload Upload Total Spent Left Speed 2025-05-19 14:26:53.615864 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 194 0 --:--:-- --:--:-- --:--:-- 197 2025-05-19 14:26:53.629875 | orchestrator | + osism apply --environment custom workarounds 2025-05-19 14:26:55.278915 | orchestrator | 2025-05-19 14:26:55 | INFO  | Trying to run play workarounds in environment custom 2025-05-19 14:26:55.334143 | orchestrator | 2025-05-19 14:26:55 | INFO  | Task d156440e-d585-46ed-a602-f9fa7bfe12b7 (workarounds) was prepared for execution. 2025-05-19 14:26:55.334239 | orchestrator | 2025-05-19 14:26:55 | INFO  | It takes a moment until task d156440e-d585-46ed-a602-f9fa7bfe12b7 (workarounds) has been started and output is visible here. 
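The workarounds play that follows begins by re-applying the netplan configuration written by the network role earlier (/etc/netplan/01-osism.yaml, which replaced the cloud-init 50-cloud-init.yaml). A minimal sketch of the shape of such a file; the interface name is a placeholder and the address is inferred from the VXLAN local_ip seen above, not read from this run:

    # Hypothetical shape of the rendered /etc/netplan/01-osism.yaml:
    cat > /etc/netplan/01-osism.yaml <<'EOF'
    network:
      version: 2
      ethernets:
        ens3:                      # placeholder interface name
          addresses:
            - 192.168.16.5/20      # management address, per the VXLAN local_ip
    EOF
    netplan apply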
2025-05-19 14:26:59.297606 | orchestrator | 2025-05-19 14:26:59.297840 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 14:26:59.298994 | orchestrator | 2025-05-19 14:26:59.302889 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-19 14:26:59.303014 | orchestrator | Monday 19 May 2025 14:26:59 +0000 (0:00:00.151) 0:00:00.151 ************ 2025-05-19 14:26:59.467984 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-19 14:26:59.552044 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-19 14:26:59.633791 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-19 14:26:59.718177 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-19 14:26:59.902216 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-19 14:27:00.063978 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-19 14:27:00.064115 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-19 14:27:00.064656 | orchestrator | 2025-05-19 14:27:00.066870 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-19 14:27:00.067511 | orchestrator | 2025-05-19 14:27:00.067850 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-19 14:27:00.068566 | orchestrator | Monday 19 May 2025 14:27:00 +0000 (0:00:00.768) 0:00:00.919 ************ 2025-05-19 14:27:02.662317 | orchestrator | ok: [testbed-manager] 2025-05-19 14:27:02.662671 | orchestrator | 2025-05-19 14:27:02.662871 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-19 14:27:02.663179 | orchestrator | 2025-05-19 14:27:02.665026 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-19 14:27:02.665361 | orchestrator | Monday 19 May 2025 14:27:02 +0000 (0:00:02.592) 0:00:03.512 ************ 2025-05-19 14:27:04.522237 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:27:04.526279 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:27:04.526314 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:27:04.526326 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:27:04.526337 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:27:04.526349 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:27:04.526579 | orchestrator | 2025-05-19 14:27:04.526912 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-19 14:27:04.527329 | orchestrator | 2025-05-19 14:27:04.529288 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-19 14:27:04.529536 | orchestrator | Monday 19 May 2025 14:27:04 +0000 (0:00:01.862) 0:00:05.374 ************ 2025-05-19 14:27:06.043071 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 14:27:06.043237 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 14:27:06.043950 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 14:27:06.046451 | orchestrator | changed: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 14:27:06.047917 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 14:27:06.048706 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 14:27:06.049845 | orchestrator | 2025-05-19 14:27:06.050638 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-19 14:27:06.051629 | orchestrator | Monday 19 May 2025 14:27:06 +0000 (0:00:01.520) 0:00:06.895 ************ 2025-05-19 14:27:09.742213 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:27:09.742933 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:27:09.743636 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:27:09.744590 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:27:09.746825 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:27:09.747099 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:27:09.748417 | orchestrator | 2025-05-19 14:27:09.749327 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-19 14:27:09.750380 | orchestrator | Monday 19 May 2025 14:27:09 +0000 (0:00:03.702) 0:00:10.597 ************ 2025-05-19 14:27:09.898269 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:27:09.972287 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:27:10.049806 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:27:10.128235 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:27:10.433876 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:27:10.434280 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:27:10.435803 | orchestrator | 2025-05-19 14:27:10.436860 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-19 14:27:10.437843 | orchestrator | 2025-05-19 14:27:10.438928 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-19 14:27:10.440028 | orchestrator | Monday 19 May 2025 14:27:10 +0000 (0:00:00.690) 0:00:11.287 ************ 2025-05-19 14:27:12.005649 | orchestrator | changed: [testbed-manager] 2025-05-19 14:27:12.006920 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:27:12.008448 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:27:12.010460 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:27:12.011177 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:27:12.012587 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:27:12.013282 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:27:12.014297 | orchestrator | 2025-05-19 14:27:12.015303 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-19 14:27:12.015903 | orchestrator | Monday 19 May 2025 14:27:12 +0000 (0:00:01.572) 0:00:12.860 ************ 2025-05-19 14:27:13.598422 | orchestrator | changed: [testbed-manager] 2025-05-19 14:27:13.598575 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:27:13.598662 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:27:13.600246 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:27:13.601563 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:27:13.603046 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:27:13.603954 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:27:13.604534 | orchestrator | 
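The CA distribution above ("Copy custom CA certificates" plus "Run update-ca-certificates"; the RedHat-style "Run update-ca-trust" is skipped on these Ubuntu nodes) is equivalent to the following on each node. The destination directory is the standard update-ca-certificates location for locally trusted certificates:

    # Manual equivalent of the CA tasks above on a Debian/Ubuntu node:
    install -m 0644 /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
        /usr/local/share/ca-certificates/testbed.crt
    update-ca-certificates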
2025-05-19 14:27:13.605120 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-19 14:27:13.606080 | orchestrator | Monday 19 May 2025 14:27:13 +0000 (0:00:01.586) 0:00:14.447 ************ 2025-05-19 14:27:15.038830 | orchestrator | ok: [testbed-manager] 2025-05-19 14:27:15.038941 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:27:15.041357 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:27:15.042163 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:27:15.042687 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:27:15.043781 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:27:15.044291 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:27:15.044996 | orchestrator | 2025-05-19 14:27:15.045771 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-19 14:27:15.046372 | orchestrator | Monday 19 May 2025 14:27:15 +0000 (0:00:01.446) 0:00:15.894 ************ 2025-05-19 14:27:16.594419 | orchestrator | changed: [testbed-manager] 2025-05-19 14:27:16.594657 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:27:16.597080 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:27:16.597143 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:27:16.597158 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:27:16.597169 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:27:16.597180 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:27:16.597192 | orchestrator | 2025-05-19 14:27:16.597221 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-19 14:27:16.597287 | orchestrator | Monday 19 May 2025 14:27:16 +0000 (0:00:01.551) 0:00:17.445 ************ 2025-05-19 14:27:16.739637 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:27:16.803620 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:27:16.876042 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:27:16.942914 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:27:17.009198 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:27:17.112520 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:27:17.115790 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:27:17.115825 | orchestrator | 2025-05-19 14:27:17.115840 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-19 14:27:17.115852 | orchestrator | 2025-05-19 14:27:17.116581 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-19 14:27:17.117640 | orchestrator | Monday 19 May 2025 14:27:17 +0000 (0:00:00.521) 0:00:17.967 ************ 2025-05-19 14:27:19.617942 | orchestrator | ok: [testbed-manager] 2025-05-19 14:27:19.618887 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:27:19.620154 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:27:19.621916 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:27:19.622932 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:27:19.623285 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:27:19.623791 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:27:19.624700 | orchestrator | 2025-05-19 14:27:19.625129 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:27:19.625447 | orchestrator | 2025-05-19 14:27:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
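The workaround service assembled above (script copy, unit copy, daemon reload, enable-only on Debian) plausibly reduces to a oneshot unit along these lines; the unit body and script path are assumptions, since the log shows only the task titles:

    # Hypothetical minimal workarounds.service matching the task sequence above:
    cat > /etc/systemd/system/workarounds.service <<'EOF'
    [Unit]
    Description=Apply testbed workarounds at boot
    After=network-online.target

    [Service]
    Type=oneshot
    # The script location below is an assumption:
    ExecStart=/usr/local/bin/workarounds.sh
    RemainAfterExit=true

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable workarounds.service   # Debian branch enables only; the RedHat branch would also start it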
2025-05-19 14:27:19.625856 | orchestrator | 2025-05-19 14:27:19 | INFO  | Please wait and do not abort execution. 2025-05-19 14:27:19.626894 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:27:19.627189 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:19.627953 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:19.628538 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:19.629233 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:19.629629 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:19.630287 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:19.630660 | orchestrator | 2025-05-19 14:27:19.631080 | orchestrator | 2025-05-19 14:27:19.631447 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:27:19.631881 | orchestrator | Monday 19 May 2025 14:27:19 +0000 (0:00:02.503) 0:00:20.471 ************ 2025-05-19 14:27:19.632238 | orchestrator | =============================================================================== 2025-05-19 14:27:19.632934 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.70s 2025-05-19 14:27:19.633355 | orchestrator | Apply netplan configuration --------------------------------------------- 2.59s 2025-05-19 14:27:19.633734 | orchestrator | Install python3-docker -------------------------------------------------- 2.50s 2025-05-19 14:27:19.634498 | orchestrator | Apply netplan configuration --------------------------------------------- 1.86s 2025-05-19 14:27:19.634523 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.59s 2025-05-19 14:27:19.634669 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.57s 2025-05-19 14:27:19.635184 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.55s 2025-05-19 14:27:19.635497 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.52s 2025-05-19 14:27:19.635852 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.45s 2025-05-19 14:27:19.636239 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s 2025-05-19 14:27:19.636497 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s 2025-05-19 14:27:19.636839 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.52s 2025-05-19 14:27:19.998073 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-19 14:27:21.545300 | orchestrator | 2025-05-19 14:27:21 | INFO  | Task 4022de14-46da-4ee4-807a-a31422e37f64 (reboot) was prepared for execution. 2025-05-19 14:27:21.545432 | orchestrator | 2025-05-19 14:27:21 | INFO  | It takes a moment until task 4022de14-46da-4ee4-807a-a31422e37f64 (reboot) has been started and output is visible here. 
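The reboot step below runs one short play per host: a confirmation guard on ireallymeanit, a fire-and-forget reboot, and a wait task that is skipped. Roughly the same effect expressed as an ad-hoc Ansible call; this is an approximation for illustration, not the play itself:

    # Fire-and-forget reboot of one node: async (-B) with no polling (-P 0),
    # mirroring the "do not wait for the reboot to complete" task below.
    ansible testbed-node-0 -m ansible.builtin.shell -a 'sleep 2 && /sbin/reboot' -B 60 -P 0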
2025-05-19 14:27:25.570191 | orchestrator | 2025-05-19 14:27:25.570802 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 14:27:25.571475 | orchestrator | 2025-05-19 14:27:25.572912 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 14:27:25.573649 | orchestrator | Monday 19 May 2025 14:27:25 +0000 (0:00:00.203) 0:00:00.203 ************ 2025-05-19 14:27:25.659358 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:27:25.660112 | orchestrator | 2025-05-19 14:27:25.660626 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 14:27:25.661528 | orchestrator | Monday 19 May 2025 14:27:25 +0000 (0:00:00.092) 0:00:00.295 ************ 2025-05-19 14:27:26.616124 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:27:26.617619 | orchestrator | 2025-05-19 14:27:26.618014 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 14:27:26.621018 | orchestrator | Monday 19 May 2025 14:27:26 +0000 (0:00:00.955) 0:00:01.251 ************ 2025-05-19 14:27:26.719684 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:27:26.722920 | orchestrator | 2025-05-19 14:27:26.723208 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 14:27:26.723841 | orchestrator | 2025-05-19 14:27:26.723971 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 14:27:26.724482 | orchestrator | Monday 19 May 2025 14:27:26 +0000 (0:00:00.102) 0:00:01.353 ************ 2025-05-19 14:27:26.840864 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:27:26.841878 | orchestrator | 2025-05-19 14:27:26.842097 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 14:27:26.842578 | orchestrator | Monday 19 May 2025 14:27:26 +0000 (0:00:00.116) 0:00:01.470 ************ 2025-05-19 14:27:27.492538 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:27:27.494009 | orchestrator | 2025-05-19 14:27:27.495233 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 14:27:27.496339 | orchestrator | Monday 19 May 2025 14:27:27 +0000 (0:00:00.658) 0:00:02.128 ************ 2025-05-19 14:27:27.619124 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:27:27.620137 | orchestrator | 2025-05-19 14:27:27.621216 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 14:27:27.622230 | orchestrator | 2025-05-19 14:27:27.623940 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 14:27:27.624533 | orchestrator | Monday 19 May 2025 14:27:27 +0000 (0:00:00.124) 0:00:02.253 ************ 2025-05-19 14:27:27.797303 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:27:27.798297 | orchestrator | 2025-05-19 14:27:27.799636 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 14:27:27.800312 | orchestrator | Monday 19 May 2025 14:27:27 +0000 (0:00:00.180) 0:00:02.433 ************ 2025-05-19 14:27:28.457328 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:27:28.460061 | orchestrator | 2025-05-19 14:27:28.460162 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 
14:27:28.460937 | orchestrator | Monday 19 May 2025 14:27:28 +0000 (0:00:00.659) 0:00:03.092 ************ 2025-05-19 14:27:28.565211 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:27:28.565308 | orchestrator | 2025-05-19 14:27:28.566551 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 14:27:28.566647 | orchestrator | 2025-05-19 14:27:28.567251 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 14:27:28.567921 | orchestrator | Monday 19 May 2025 14:27:28 +0000 (0:00:00.105) 0:00:03.198 ************ 2025-05-19 14:27:28.659812 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:27:28.661503 | orchestrator | 2025-05-19 14:27:28.663063 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 14:27:28.663761 | orchestrator | Monday 19 May 2025 14:27:28 +0000 (0:00:00.098) 0:00:03.296 ************ 2025-05-19 14:27:29.336212 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:27:29.336317 | orchestrator | 2025-05-19 14:27:29.336653 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 14:27:29.340948 | orchestrator | Monday 19 May 2025 14:27:29 +0000 (0:00:00.675) 0:00:03.972 ************ 2025-05-19 14:27:29.445995 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:27:29.446140 | orchestrator | 2025-05-19 14:27:29.447621 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 14:27:29.447646 | orchestrator | 2025-05-19 14:27:29.447971 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 14:27:29.449766 | orchestrator | Monday 19 May 2025 14:27:29 +0000 (0:00:00.110) 0:00:04.082 ************ 2025-05-19 14:27:29.548519 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:27:29.550281 | orchestrator | 2025-05-19 14:27:29.550319 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 14:27:29.551268 | orchestrator | Monday 19 May 2025 14:27:29 +0000 (0:00:00.100) 0:00:04.183 ************ 2025-05-19 14:27:30.228600 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:27:30.230259 | orchestrator | 2025-05-19 14:27:30.231607 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 14:27:30.232376 | orchestrator | Monday 19 May 2025 14:27:30 +0000 (0:00:00.680) 0:00:04.863 ************ 2025-05-19 14:27:30.344063 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:27:30.344464 | orchestrator | 2025-05-19 14:27:30.345578 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 14:27:30.347152 | orchestrator | 2025-05-19 14:27:30.347843 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 14:27:30.348888 | orchestrator | Monday 19 May 2025 14:27:30 +0000 (0:00:00.114) 0:00:04.978 ************ 2025-05-19 14:27:30.431977 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:27:30.432189 | orchestrator | 2025-05-19 14:27:30.434694 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 14:27:30.435305 | orchestrator | Monday 19 May 2025 14:27:30 +0000 (0:00:00.089) 0:00:05.068 ************ 2025-05-19 14:27:31.080253 | orchestrator | changed: [testbed-node-5] 2025-05-19 
14:27:31.080499 | orchestrator | 2025-05-19 14:27:31.080533 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 14:27:31.080694 | orchestrator | Monday 19 May 2025 14:27:31 +0000 (0:00:00.644) 0:00:05.713 ************ 2025-05-19 14:27:31.111162 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:27:31.111952 | orchestrator | 2025-05-19 14:27:31.111988 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:27:31.112046 | orchestrator | 2025-05-19 14:27:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:27:31.112062 | orchestrator | 2025-05-19 14:27:31 | INFO  | Please wait and do not abort execution. 2025-05-19 14:27:31.112317 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:31.112817 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:31.113451 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:31.114537 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:31.114900 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:31.115223 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:27:31.115781 | orchestrator | 2025-05-19 14:27:31.116158 | orchestrator | 2025-05-19 14:27:31.117036 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:27:31.118193 | orchestrator | Monday 19 May 2025 14:27:31 +0000 (0:00:00.035) 0:00:05.748 ************ 2025-05-19 14:27:31.118346 | orchestrator | =============================================================================== 2025-05-19 14:27:31.118681 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.27s 2025-05-19 14:27:31.119279 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.68s 2025-05-19 14:27:31.119620 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s 2025-05-19 14:27:31.617054 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-19 14:27:33.337347 | orchestrator | 2025-05-19 14:27:33 | INFO  | Task 7ece8b61-7c4a-45cd-ad2e-87618356d59a (wait-for-connection) was prepared for execution. 2025-05-19 14:27:33.337450 | orchestrator | 2025-05-19 14:27:33 | INFO  | It takes a moment until task 7ece8b61-7c4a-45cd-ad2e-87618356d59a (wait-for-connection) has been started and output is visible here. 
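The reboot play above intentionally skips the in-play wait (the "wait for the reboot to complete" task is skipped for every node); reachability is verified afterwards by the separate wait-for-connection run whose output follows. A minimal bash sketch of the same fire-and-forget-then-poll pattern; the node list matches the log, but the attempt count and delay are illustrative assumptions:

    #!/usr/bin/env bash
    # Sketch only: trigger reboots without waiting, then poll SSH until
    # each node answers again. 60 attempts x 5s are assumed values.
    NODES="testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5"

    for node in $NODES; do
        # "Reboot system - do not wait for the reboot to complete"
        ssh "$node" 'sudo systemctl reboot' || true
    done

    for node in $NODES; do
        # "Wait until remote system is reachable"
        for attempt in $(seq 1 60); do
            if ssh -o ConnectTimeout=5 "$node" true 2>/dev/null; then
                echo "$node is reachable again"
                break
            fi
            sleep 5
        done
    done
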
2025-05-19 14:27:37.369214 | orchestrator | 2025-05-19 14:27:37.369749 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-19 14:27:37.369789 | orchestrator | 2025-05-19 14:27:37.372021 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-19 14:27:37.372477 | orchestrator | Monday 19 May 2025 14:27:37 +0000 (0:00:00.228) 0:00:00.228 ************ 2025-05-19 14:27:49.815591 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:27:49.815750 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:27:49.816193 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:27:49.817557 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:27:49.818310 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:27:49.819360 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:27:49.820203 | orchestrator | 2025-05-19 14:27:49.820941 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:27:49.821707 | orchestrator | 2025-05-19 14:27:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:27:49.821745 | orchestrator | 2025-05-19 14:27:49 | INFO  | Please wait and do not abort execution. 2025-05-19 14:27:49.822173 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:27:49.823408 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:27:49.823814 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:27:49.824115 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:27:49.824363 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:27:49.825064 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:27:49.825316 | orchestrator | 2025-05-19 14:27:49.825545 | orchestrator | 2025-05-19 14:27:49.825929 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:27:49.826277 | orchestrator | Monday 19 May 2025 14:27:49 +0000 (0:00:12.456) 0:00:12.685 ************ 2025-05-19 14:27:49.826660 | orchestrator | =============================================================================== 2025-05-19 14:27:49.827083 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.46s 2025-05-19 14:27:50.341802 | orchestrator | + osism apply hddtemp 2025-05-19 14:27:52.096518 | orchestrator | 2025-05-19 14:27:52 | INFO  | Task ad9d63e0-05a3-4178-8eae-4ff4d3402818 (hddtemp) was prepared for execution. 2025-05-19 14:27:52.096788 | orchestrator | 2025-05-19 14:27:52 | INFO  | It takes a moment until task ad9d63e0-05a3-4178-8eae-4ff4d3402818 (hddtemp) has been started and output is visible here. 
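The hddtemp play that follows replaces the deprecated hddtemp package with the in-kernel drivetemp hwmon driver plus lm-sensors. A hedged sketch of the manual Debian-family equivalent of the tasks logged below; package and unit names are as commonly shipped on Ubuntu/Debian:

    # Sketch of the per-node steps the osism.services.hddtemp role performs:
    sudo apt-get remove -y hddtemp                                 # "Remove hddtemp package"
    echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf   # "Enable Kernel Module drivetemp"
    sudo modprobe drivetemp                                        # "Load Kernel Module drivetemp" (skipped where already loaded)
    sudo apt-get install -y lm-sensors                             # "Install lm-sensors"
    sudo systemctl enable --now lm-sensors                         # "Manage lm-sensors service"
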
2025-05-19 14:27:56.304440 | orchestrator | 2025-05-19 14:27:56.304643 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-19 14:27:56.305506 | orchestrator | 2025-05-19 14:27:56.307277 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-19 14:27:56.311102 | orchestrator | Monday 19 May 2025 14:27:56 +0000 (0:00:00.245) 0:00:00.245 ************ 2025-05-19 14:27:56.435058 | orchestrator | ok: [testbed-manager] 2025-05-19 14:27:56.500349 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:27:56.565737 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:27:56.631259 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:27:56.771764 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:27:56.883982 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:27:56.884715 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:27:56.885978 | orchestrator | 2025-05-19 14:27:56.886791 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-19 14:27:56.887999 | orchestrator | Monday 19 May 2025 14:27:56 +0000 (0:00:00.579) 0:00:00.825 ************ 2025-05-19 14:27:57.904009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:27:57.904978 | orchestrator | 2025-05-19 14:27:57.905257 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-19 14:27:57.907052 | orchestrator | Monday 19 May 2025 14:27:57 +0000 (0:00:01.019) 0:00:01.844 ************ 2025-05-19 14:27:59.764864 | orchestrator | ok: [testbed-manager] 2025-05-19 14:27:59.765029 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:27:59.766704 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:27:59.768344 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:27:59.769849 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:27:59.770767 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:27:59.771565 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:27:59.772641 | orchestrator | 2025-05-19 14:27:59.773388 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-19 14:27:59.774203 | orchestrator | Monday 19 May 2025 14:27:59 +0000 (0:00:01.861) 0:00:03.705 ************ 2025-05-19 14:28:00.250880 | orchestrator | changed: [testbed-manager] 2025-05-19 14:28:00.324651 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:28:00.740427 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:28:00.741628 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:28:00.743265 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:28:00.744415 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:28:00.745557 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:28:00.747334 | orchestrator | 2025-05-19 14:28:00.748249 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-19 14:28:00.749749 | orchestrator | Monday 19 May 2025 14:28:00 +0000 (0:00:00.973) 0:00:04.679 ************ 2025-05-19 14:28:01.860698 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:28:01.863920 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:28:01.865738 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:28:01.866722 | orchestrator | ok: [testbed-node-3] 2025-05-19 
14:28:01.867755 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:28:01.868826 | orchestrator | ok: [testbed-manager] 2025-05-19 14:28:01.869702 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:28:01.870967 | orchestrator | 2025-05-19 14:28:01.871565 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-19 14:28:01.872155 | orchestrator | Monday 19 May 2025 14:28:01 +0000 (0:00:01.121) 0:00:05.800 ************ 2025-05-19 14:28:02.283005 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:28:02.367077 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:28:02.458180 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:28:02.557454 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:28:02.687448 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:28:02.687569 | orchestrator | changed: [testbed-manager] 2025-05-19 14:28:02.689094 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:28:02.692010 | orchestrator | 2025-05-19 14:28:02.692711 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-19 14:28:02.694308 | orchestrator | Monday 19 May 2025 14:28:02 +0000 (0:00:00.827) 0:00:06.628 ************ 2025-05-19 14:28:15.320775 | orchestrator | changed: [testbed-manager] 2025-05-19 14:28:15.320858 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:28:15.320909 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:28:15.324829 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:28:15.324862 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:28:15.324871 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:28:15.324879 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:28:15.324886 | orchestrator | 2025-05-19 14:28:15.325566 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-19 14:28:15.326037 | orchestrator | Monday 19 May 2025 14:28:15 +0000 (0:00:12.630) 0:00:19.259 ************ 2025-05-19 14:28:16.504024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:28:16.504295 | orchestrator | 2025-05-19 14:28:16.507917 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-05-19 14:28:16.507948 | orchestrator | Monday 19 May 2025 14:28:16 +0000 (0:00:01.183) 0:00:20.442 ************ 2025-05-19 14:28:18.388415 | orchestrator | changed: [testbed-manager] 2025-05-19 14:28:18.388533 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:28:18.389469 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:28:18.390214 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:28:18.391717 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:28:18.392962 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:28:18.394959 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:28:18.395943 | orchestrator | 2025-05-19 14:28:18.396483 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:28:18.397767 | orchestrator | 2025-05-19 14:28:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:28:18.397796 | orchestrator | 2025-05-19 14:28:18 | INFO  | Please wait and do not abort execution. 
2025-05-19 14:28:18.398489 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:28:18.400155 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:18.400177 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:18.400838 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:18.401746 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:18.402851 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:18.404287 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:18.406278 | orchestrator | 2025-05-19 14:28:18.411118 | orchestrator | 2025-05-19 14:28:18.412501 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:28:18.412955 | orchestrator | Monday 19 May 2025 14:28:18 +0000 (0:00:01.886) 0:00:22.329 ************ 2025-05-19 14:28:18.413815 | orchestrator | =============================================================================== 2025-05-19 14:28:18.414441 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.63s 2025-05-19 14:28:18.414960 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.89s 2025-05-19 14:28:18.415600 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.86s 2025-05-19 14:28:18.416030 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.18s 2025-05-19 14:28:18.416845 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.12s 2025-05-19 14:28:18.417311 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.02s 2025-05-19 14:28:18.417833 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.97s 2025-05-19 14:28:18.418531 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.83s 2025-05-19 14:28:18.418824 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.58s 2025-05-19 14:28:18.971853 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-19 14:28:20.466968 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-19 14:28:20.467050 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-19 14:28:20.467058 | orchestrator | + local max_attempts=60 2025-05-19 14:28:20.467065 | orchestrator | + local name=ceph-ansible 2025-05-19 14:28:20.467070 | orchestrator | + local attempt_num=1 2025-05-19 14:28:20.467957 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-19 14:28:20.501415 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 14:28:20.501462 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-19 14:28:20.501467 | orchestrator | + local max_attempts=60 2025-05-19 14:28:20.501472 | orchestrator | + local name=kolla-ansible 2025-05-19 14:28:20.501477 | orchestrator | + local attempt_num=1 2025-05-19 14:28:20.501939 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' kolla-ansible 2025-05-19 14:28:20.535384 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 14:28:20.535425 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-19 14:28:20.535430 | orchestrator | + local max_attempts=60 2025-05-19 14:28:20.535434 | orchestrator | + local name=osism-ansible 2025-05-19 14:28:20.535439 | orchestrator | + local attempt_num=1 2025-05-19 14:28:20.536725 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-19 14:28:20.570469 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 14:28:20.570515 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-19 14:28:20.570521 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-19 14:28:20.729301 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-19 14:28:20.882926 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-19 14:28:21.059428 | orchestrator | ARA in osism-ansible already disabled. 2025-05-19 14:28:21.222299 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-19 14:28:21.222439 | orchestrator | + osism apply gather-facts 2025-05-19 14:28:22.942202 | orchestrator | 2025-05-19 14:28:22 | INFO  | Task 32abced6-910d-4432-a62f-9650981dcba0 (gather-facts) was prepared for execution. 2025-05-19 14:28:22.942309 | orchestrator | 2025-05-19 14:28:22 | INFO  | It takes a moment until task 32abced6-910d-4432-a62f-9650981dcba0 (gather-facts) has been started and output is visible here. 2025-05-19 14:28:26.835644 | orchestrator | 2025-05-19 14:28:26.838246 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 14:28:26.839070 | orchestrator | 2025-05-19 14:28:26.841613 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 14:28:26.842399 | orchestrator | Monday 19 May 2025 14:28:26 +0000 (0:00:00.169) 0:00:00.169 ************ 2025-05-19 14:28:31.687457 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:28:31.688188 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:28:31.689559 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:28:31.692579 | orchestrator | ok: [testbed-manager] 2025-05-19 14:28:31.692607 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:28:31.693253 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:28:31.694946 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:28:31.695586 | orchestrator | 2025-05-19 14:28:31.696783 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-19 14:28:31.698148 | orchestrator | 2025-05-19 14:28:31.699020 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-19 14:28:31.699732 | orchestrator | Monday 19 May 2025 14:28:31 +0000 (0:00:04.853) 0:00:05.023 ************ 2025-05-19 14:28:31.830747 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:28:31.906505 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:28:31.979854 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:28:32.054926 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:28:32.130150 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:28:32.173431 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:28:32.173849 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:28:32.174120 | orchestrator | 2025-05-19 14:28:32.174824 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-19 14:28:32.175216 | orchestrator | 2025-05-19 14:28:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:28:32.175467 | orchestrator | 2025-05-19 14:28:32 | INFO  | Please wait and do not abort execution. 2025-05-19 14:28:32.176805 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:32.177210 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:32.177712 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:32.178249 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:32.178628 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:32.179029 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:32.179410 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 14:28:32.179763 | orchestrator | 2025-05-19 14:28:32.180006 | orchestrator | 2025-05-19 14:28:32.181663 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:28:32.182000 | orchestrator | Monday 19 May 2025 14:28:32 +0000 (0:00:00.487) 0:00:05.510 ************ 2025-05-19 14:28:32.182772 | orchestrator | =============================================================================== 2025-05-19 14:28:32.183688 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s 2025-05-19 14:28:32.184111 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2025-05-19 14:28:32.730869 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-19 14:28:32.743535 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-19 14:28:32.759243 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-19 14:28:32.774693 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-19 14:28:32.786114 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-19 14:28:32.802406 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-19 14:28:32.817809 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-19 14:28:32.831362 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-19 14:28:32.848131 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-19 14:28:32.865159 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-19 14:28:32.883612 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-19 14:28:32.902712 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-19 14:28:32.916383 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-05-19 14:28:32.928270 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-19 14:28:32.947936 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-19 14:28:32.962199 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-19 14:28:32.977725 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-19 14:28:32.991845 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-19 14:28:33.007284 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-19 14:28:33.028419 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-19 14:28:33.042614 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-19 14:28:33.490269 | orchestrator | ok: Runtime: 0:25:39.644453 2025-05-19 14:28:33.600249 | 2025-05-19 14:28:33.600431 | TASK [Deploy services] 2025-05-19 14:28:34.138263 | orchestrator | skipping: Conditional result was False 2025-05-19 14:28:34.150019 | 2025-05-19 14:28:34.150160 | TASK [Deploy in a nutshell] 2025-05-19 14:28:34.841417 | orchestrator | + set -e 2025-05-19 14:28:34.841599 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 14:28:34.841621 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 14:28:34.841667 | orchestrator | ++ INTERACTIVE=false 2025-05-19 14:28:34.841682 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 14:28:34.841695 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 14:28:34.841708 | orchestrator | + source /opt/manager-vars.sh 2025-05-19 14:28:34.841753 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-19 14:28:34.841781 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-19 14:28:34.841794 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-19 14:28:34.841809 | orchestrator | ++ CEPH_VERSION=reef 2025-05-19 14:28:34.841821 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-19 14:28:34.841840 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-19 14:28:34.841851 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 14:28:34.841872 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-19 14:28:34.841883 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-19 14:28:34.841897 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-19 14:28:34.841908 | orchestrator | ++ export ARA=false 2025-05-19 14:28:34.841919 | orchestrator | ++ ARA=false 2025-05-19 14:28:34.841930 | orchestrator | ++ export TEMPEST=false 2025-05-19 14:28:34.841942 | orchestrator | ++ TEMPEST=false 2025-05-19 14:28:34.841953 | orchestrator | ++ export IS_ZUUL=true 2025-05-19 14:28:34.841964 | orchestrator | ++ IS_ZUUL=true 2025-05-19 14:28:34.841989 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2025-05-19 
14:28:34.842000 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2025-05-19 14:28:34.842011 | orchestrator | ++ export EXTERNAL_API=false 2025-05-19 14:28:34.842072 | orchestrator | ++ EXTERNAL_API=false 2025-05-19 14:28:34.842084 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-19 14:28:34.842094 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-19 14:28:34.842105 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-19 14:28:34.842116 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-19 14:28:34.842127 | orchestrator | 2025-05-19 14:28:34.842138 | orchestrator | # PULL IMAGES 2025-05-19 14:28:34.842150 | orchestrator | 2025-05-19 14:28:34.842161 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-19 14:28:34.842171 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-19 14:28:34.842182 | orchestrator | + echo 2025-05-19 14:28:34.842193 | orchestrator | + echo '# PULL IMAGES' 2025-05-19 14:28:34.842210 | orchestrator | + echo 2025-05-19 14:28:34.842954 | orchestrator | ++ semver latest 7.0.0 2025-05-19 14:28:34.899949 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-19 14:28:34.900030 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-19 14:28:34.900042 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-19 14:28:36.492077 | orchestrator | 2025-05-19 14:28:36 | INFO  | Trying to run play pull-images in environment custom 2025-05-19 14:28:36.548481 | orchestrator | 2025-05-19 14:28:36 | INFO  | Task fdbc59d0-4f8c-48ee-a3c3-9dfa990489e9 (pull-images) was prepared for execution. 2025-05-19 14:28:36.548523 | orchestrator | 2025-05-19 14:28:36 | INFO  | It takes a moment until task fdbc59d0-4f8c-48ee-a3c3-9dfa990489e9 (pull-images) has been started and output is visible here. 2025-05-19 14:28:40.193273 | orchestrator | 2025-05-19 14:28:40.193332 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-19 14:28:40.193338 | orchestrator | 2025-05-19 14:28:40.195144 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-19 14:28:40.195426 | orchestrator | Monday 19 May 2025 14:28:40 +0000 (0:00:00.112) 0:00:00.112 ************ 2025-05-19 14:29:46.827885 | orchestrator | changed: [testbed-manager] 2025-05-19 14:29:46.828253 | orchestrator | 2025-05-19 14:29:46.829008 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-19 14:29:46.830372 | orchestrator | Monday 19 May 2025 14:29:46 +0000 (0:01:06.635) 0:01:06.748 ************ 2025-05-19 14:30:37.623091 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-19 14:30:37.623773 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-19 14:30:37.625270 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-19 14:30:37.626802 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-19 14:30:37.628059 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-19 14:30:37.629380 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-19 14:30:37.630282 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-19 14:30:37.631262 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-19 14:30:37.632034 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-19 14:30:37.633125 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-05-19 14:30:37.633903 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-19 
14:30:37.634896 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-19 14:30:37.635484 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-19 14:30:37.635970 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-19 14:30:37.636154 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-19 14:30:37.636664 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-19 14:30:37.637160 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-19 14:30:37.637608 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-19 14:30:37.637960 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-19 14:30:37.638347 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-19 14:30:37.638705 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-19 14:30:37.639179 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-19 14:30:37.639568 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-19 14:30:37.639908 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-19 14:30:37.640370 | orchestrator | 2025-05-19 14:30:37.640777 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:30:37.641041 | orchestrator | 2025-05-19 14:30:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:30:37.641097 | orchestrator | 2025-05-19 14:30:37 | INFO  | Please wait and do not abort execution. 2025-05-19 14:30:37.641852 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:30:37.642212 | orchestrator | 2025-05-19 14:30:37.642429 | orchestrator | 2025-05-19 14:30:37.642684 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:30:37.642963 | orchestrator | Monday 19 May 2025 14:30:37 +0000 (0:00:50.795) 0:01:57.543 ************ 2025-05-19 14:30:37.643305 | orchestrator | =============================================================================== 2025-05-19 14:30:37.643631 | orchestrator | Pull keystone image ---------------------------------------------------- 66.64s 2025-05-19 14:30:37.644075 | orchestrator | Pull other images ------------------------------------------------------ 50.80s 2025-05-19 14:30:39.915731 | orchestrator | 2025-05-19 14:30:39 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-19 14:30:39.977052 | orchestrator | 2025-05-19 14:30:39 | INFO  | Task d52f0728-afd1-4e73-b9ec-bab205713577 (wipe-partitions) was prepared for execution. 2025-05-19 14:30:39.977174 | orchestrator | 2025-05-19 14:30:39 | INFO  | It takes a moment until task d52f0728-afd1-4e73-b9ec-bab205713577 (wipe-partitions) has been started and output is visible here. 
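The wipe-partitions play prepares the Ceph OSD disks (/dev/sdb, /dev/sdc and /dev/sdd on testbed-node-3..5, per the output below) by clearing any old signatures and metadata. A hedged bash sketch of the same per-device sequence; the dd flags are illustrative:

    # Destructive: wipes the named devices. Device list is from the log.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        sudo wipefs --all "$dev"                                    # "Wipe partitions with wipefs"
        sudo dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct  # "Overwrite first 32M with zeros"
    done
    sudo udevadm control --reload-rules                             # "Reload udev rules"
    sudo udevadm trigger                                            # "Request device events from the kernel"
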
2025-05-19 14:30:43.966434 | orchestrator | 2025-05-19 14:30:43.966545 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-19 14:30:43.966562 | orchestrator | 2025-05-19 14:30:43.966827 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-19 14:30:43.972678 | orchestrator | Monday 19 May 2025 14:30:43 +0000 (0:00:00.131) 0:00:00.131 ************ 2025-05-19 14:30:44.568813 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:30:44.568903 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:30:44.569406 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:30:44.571207 | orchestrator | 2025-05-19 14:30:44.572012 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-19 14:30:44.572274 | orchestrator | Monday 19 May 2025 14:30:44 +0000 (0:00:00.605) 0:00:00.736 ************ 2025-05-19 14:30:44.719076 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:30:44.814761 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:30:44.814848 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:30:44.815290 | orchestrator | 2025-05-19 14:30:44.815917 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-19 14:30:44.816470 | orchestrator | Monday 19 May 2025 14:30:44 +0000 (0:00:00.246) 0:00:00.983 ************ 2025-05-19 14:30:45.505172 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:30:45.506920 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:30:45.506998 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:30:45.509776 | orchestrator | 2025-05-19 14:30:45.510360 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-19 14:30:45.511036 | orchestrator | Monday 19 May 2025 14:30:45 +0000 (0:00:00.688) 0:00:01.672 ************ 2025-05-19 14:30:45.663219 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:30:45.777900 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:30:45.778167 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:30:45.778518 | orchestrator | 2025-05-19 14:30:45.778930 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-19 14:30:45.779484 | orchestrator | Monday 19 May 2025 14:30:45 +0000 (0:00:00.273) 0:00:01.945 ************ 2025-05-19 14:30:46.983052 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-19 14:30:46.983194 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-19 14:30:46.983272 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-19 14:30:46.983744 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-19 14:30:46.985017 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-19 14:30:46.985045 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-19 14:30:46.985057 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-19 14:30:46.985399 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-19 14:30:46.986814 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-19 14:30:46.987746 | orchestrator | 2025-05-19 14:30:46.988189 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-19 14:30:46.988680 | orchestrator | Monday 19 May 2025 14:30:46 +0000 (0:00:01.203) 0:00:03.149 ************ 2025-05-19 14:30:48.290132 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-19 14:30:48.290639 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-19 14:30:48.290712 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-19 14:30:48.296267 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-19 14:30:48.296673 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-19 14:30:48.296829 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-19 14:30:48.297130 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-19 14:30:48.298421 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-19 14:30:48.298450 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-19 14:30:48.298462 | orchestrator | 2025-05-19 14:30:48.298966 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-19 14:30:48.299970 | orchestrator | Monday 19 May 2025 14:30:48 +0000 (0:00:01.304) 0:00:04.454 ************ 2025-05-19 14:30:50.596943 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-19 14:30:50.597867 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-19 14:30:50.597914 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-19 14:30:50.597927 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-19 14:30:50.598085 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-19 14:30:50.598151 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-19 14:30:50.601538 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-19 14:30:50.601616 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-19 14:30:50.601670 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-19 14:30:50.601685 | orchestrator | 2025-05-19 14:30:50.601777 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-19 14:30:50.602013 | orchestrator | Monday 19 May 2025 14:30:50 +0000 (0:00:02.309) 0:00:06.764 ************ 2025-05-19 14:30:51.193268 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:30:51.193368 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:30:51.193443 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:30:51.193679 | orchestrator | 2025-05-19 14:30:51.193952 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-19 14:30:51.194247 | orchestrator | Monday 19 May 2025 14:30:51 +0000 (0:00:00.594) 0:00:07.358 ************ 2025-05-19 14:30:51.788185 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:30:51.788346 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:30:51.788726 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:30:51.788990 | orchestrator | 2025-05-19 14:30:51.791163 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:30:51.791350 | orchestrator | 2025-05-19 14:30:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:30:51.791406 | orchestrator | 2025-05-19 14:30:51 | INFO  | Please wait and do not abort execution. 
2025-05-19 14:30:51.791641 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:30:51.791666 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:30:51.793254 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:30:51.793290 | orchestrator | 2025-05-19 14:30:51.793477 | orchestrator | 2025-05-19 14:30:51.793733 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:30:51.794088 | orchestrator | Monday 19 May 2025 14:30:51 +0000 (0:00:00.599) 0:00:07.958 ************ 2025-05-19 14:30:51.794356 | orchestrator | =============================================================================== 2025-05-19 14:30:51.794714 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.31s 2025-05-19 14:30:51.794998 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.30s 2025-05-19 14:30:51.795310 | orchestrator | Check device availability ----------------------------------------------- 1.20s 2025-05-19 14:30:51.795601 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.69s 2025-05-19 14:30:51.795909 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.61s 2025-05-19 14:30:51.796182 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2025-05-19 14:30:51.796437 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-05-19 14:30:51.796741 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2025-05-19 14:30:51.797043 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s 2025-05-19 14:30:53.918760 | orchestrator | 2025-05-19 14:30:53 | INFO  | Task 3dd64c5a-0644-429f-800b-defd2e3092f4 (facts) was prepared for execution. 2025-05-19 14:30:53.918841 | orchestrator | 2025-05-19 14:30:53 | INFO  | It takes a moment until task 3dd64c5a-0644-429f-800b-defd2e3092f4 (facts) has been started and output is visible here. 
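The facts play below manages the custom ("local") facts directory and refreshes Ansible's fact cache. For reference, a hedged sketch of how Ansible local facts work in general; example.fact is a hypothetical file name:

    # Any executable or *.fact JSON file in /etc/ansible/facts.d shows up
    # under the ansible_local variable on the next fact gathering run.
    sudo mkdir -p /etc/ansible/facts.d
    echo '{"role": "testbed-node"}' | sudo tee /etc/ansible/facts.d/example.fact
    ansible localhost -m setup -a 'filter=ansible_local'   # verify, assuming the ansible CLI is available
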
2025-05-19 14:30:57.524278 | orchestrator | 2025-05-19 14:30:57.524394 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-19 14:30:57.524411 | orchestrator | 2025-05-19 14:30:57.524423 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-19 14:30:57.524435 | orchestrator | Monday 19 May 2025 14:30:57 +0000 (0:00:00.204) 0:00:00.204 ************ 2025-05-19 14:30:58.397669 | orchestrator | ok: [testbed-manager] 2025-05-19 14:30:58.399053 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:30:58.399403 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:30:58.400897 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:30:58.401620 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:30:58.402770 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:30:58.404547 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:30:58.405040 | orchestrator | 2025-05-19 14:30:58.406999 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-19 14:30:58.409262 | orchestrator | Monday 19 May 2025 14:30:58 +0000 (0:00:00.878) 0:00:01.082 ************ 2025-05-19 14:30:58.557673 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:30:58.628230 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:30:58.699472 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:30:58.768850 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:30:58.836702 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:30:59.490192 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:30:59.490279 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:30:59.490546 | orchestrator | 2025-05-19 14:30:59.491191 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 14:30:59.491650 | orchestrator | 2025-05-19 14:30:59.492049 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 14:30:59.495859 | orchestrator | Monday 19 May 2025 14:30:59 +0000 (0:00:01.096) 0:00:02.179 ************ 2025-05-19 14:31:04.949129 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:31:04.950515 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:31:04.950552 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:31:04.950602 | orchestrator | ok: [testbed-manager] 2025-05-19 14:31:04.950615 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:31:04.952212 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:31:04.952235 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:31:04.952246 | orchestrator | 2025-05-19 14:31:04.956013 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-19 14:31:04.956040 | orchestrator | 2025-05-19 14:31:04.956375 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-19 14:31:04.956690 | orchestrator | Monday 19 May 2025 14:31:04 +0000 (0:00:05.452) 0:00:07.632 ************ 2025-05-19 14:31:05.127718 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:31:05.210090 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:31:05.317602 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:31:05.399993 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:31:05.479404 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:31:05.525977 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:31:05.526176 | orchestrator | skipping: 
[testbed-node-5] 2025-05-19 14:31:05.526258 | orchestrator | 2025-05-19 14:31:05.527134 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:31:05.527787 | orchestrator | 2025-05-19 14:31:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:31:05.527816 | orchestrator | 2025-05-19 14:31:05 | INFO  | Please wait and do not abort execution. 2025-05-19 14:31:05.532163 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:31:05.532672 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:31:05.533188 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:31:05.533615 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:31:05.534711 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:31:05.536444 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:31:05.536705 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:31:05.536883 | orchestrator | 2025-05-19 14:31:05.537360 | orchestrator | 2025-05-19 14:31:05.537980 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:31:05.538375 | orchestrator | Monday 19 May 2025 14:31:05 +0000 (0:00:00.577) 0:00:08.209 ************ 2025-05-19 14:31:05.539433 | orchestrator | =============================================================================== 2025-05-19 14:31:05.540547 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.45s 2025-05-19 14:31:05.542005 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s 2025-05-19 14:31:05.542803 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.88s 2025-05-19 14:31:05.543716 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2025-05-19 14:31:07.688912 | orchestrator | 2025-05-19 14:31:07 | INFO  | Task 8f115c76-4cbc-4c82-b104-2c28a9974aa1 (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-19 14:31:07.688972 | orchestrator | 2025-05-19 14:31:07 | INFO  | It takes a moment until task 8f115c76-4cbc-4c82-b104-2c28a9974aa1 (ceph-configure-lvm-volumes) has been started and output is visible here. 
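The ceph-configure-lvm-volumes play enumerates each node's block devices and maps them to their stable /dev/disk/by-id links (the scsi-0QEMU_QEMU_HARDDISK_... names below), so that OSD definitions survive /dev/sdX renaming across reboots. A sketch of how to inspect the same identities by hand:

    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT      # initial list of available block devices
    ls -l /dev/disk/by-id/                  # stable links, e.g. scsi-0QEMU_QEMU_HARDDISK_...
    udevadm info --query=symlink /dev/sdb   # all known links for a single device
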
2025-05-19 14:31:10.931314 | orchestrator | 2025-05-19 14:31:10.932300 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-19 14:31:10.932329 | orchestrator | 2025-05-19 14:31:10.932357 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 14:31:10.932551 | orchestrator | Monday 19 May 2025 14:31:10 +0000 (0:00:00.243) 0:00:00.243 ************ 2025-05-19 14:31:11.139520 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 14:31:11.140030 | orchestrator | 2025-05-19 14:31:11.140785 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-19 14:31:11.143860 | orchestrator | Monday 19 May 2025 14:31:11 +0000 (0:00:00.210) 0:00:00.453 ************ 2025-05-19 14:31:11.372733 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:31:11.375006 | orchestrator | 2025-05-19 14:31:11.375539 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:11.376724 | orchestrator | Monday 19 May 2025 14:31:11 +0000 (0:00:00.235) 0:00:00.689 ************ 2025-05-19 14:31:11.692442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-19 14:31:11.693381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-19 14:31:11.694902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-19 14:31:11.697477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-19 14:31:11.698298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-19 14:31:11.698881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-19 14:31:11.699812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-19 14:31:11.700952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-19 14:31:11.703528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-19 14:31:11.703797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-19 14:31:11.704910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-19 14:31:11.706183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-19 14:31:11.706621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-19 14:31:11.709529 | orchestrator | 2025-05-19 14:31:11.710154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:11.711368 | orchestrator | Monday 19 May 2025 14:31:11 +0000 (0:00:00.317) 0:00:01.007 ************ 2025-05-19 14:31:12.073259 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:31:12.076280 | orchestrator | 2025-05-19 14:31:12.076308 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:12.078434 | orchestrator | Monday 19 May 2025 14:31:12 +0000 (0:00:00.381) 0:00:01.389 ************ 2025-05-19 14:31:12.253826 | orchestrator | skipping: [testbed-node-3] 2025-05-19 
14:31:12.253907 | orchestrator | 2025-05-19 14:31:12.253920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:12.253932 | orchestrator | Monday 19 May 2025 14:31:12 +0000 (0:00:00.175) 0:00:01.564 ************ 2025-05-19 14:31:12.417057 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:31:12.421788 | orchestrator | 2025-05-19 14:31:12.422202 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:12.422259 | orchestrator | Monday 19 May 2025 14:31:12 +0000 (0:00:00.166) 0:00:01.730 ************ 2025-05-19 14:31:12.590218 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:31:12.591473 | orchestrator | 2025-05-19 14:31:12.591713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:12.592128 | orchestrator | Monday 19 May 2025 14:31:12 +0000 (0:00:00.176) 0:00:01.906 ************ 2025-05-19 14:31:12.757047 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:31:12.757804 | orchestrator | 2025-05-19 14:31:12.758144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:12.758758 | orchestrator | Monday 19 May 2025 14:31:12 +0000 (0:00:00.166) 0:00:02.073 ************ 2025-05-19 14:31:12.921276 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:31:12.921383 | orchestrator | 2025-05-19 14:31:12.921532 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:12.923296 | orchestrator | Monday 19 May 2025 14:31:12 +0000 (0:00:00.161) 0:00:02.235 ************ 2025-05-19 14:31:13.093153 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:31:13.093241 | orchestrator | 2025-05-19 14:31:13.094130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:13.094480 | orchestrator | Monday 19 May 2025 14:31:13 +0000 (0:00:00.173) 0:00:02.409 ************ 2025-05-19 14:31:13.293839 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:31:13.297992 | orchestrator | 2025-05-19 14:31:13.300161 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:13.300189 | orchestrator | Monday 19 May 2025 14:31:13 +0000 (0:00:00.198) 0:00:02.607 ************ 2025-05-19 14:31:13.652931 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484) 2025-05-19 14:31:13.653101 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484) 2025-05-19 14:31:13.653616 | orchestrator | 2025-05-19 14:31:13.654440 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:13.655845 | orchestrator | Monday 19 May 2025 14:31:13 +0000 (0:00:00.356) 0:00:02.964 ************ 2025-05-19 14:31:14.013025 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_680c5e0d-c4c7-4132-acba-9735c42c1af0) 2025-05-19 14:31:14.013889 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_680c5e0d-c4c7-4132-acba-9735c42c1af0) 2025-05-19 14:31:14.014303 | orchestrator | 2025-05-19 14:31:14.015100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:14.015667 | orchestrator | Monday 19 May 2025 14:31:14 +0000 (0:00:00.364) 0:00:03.328 ************ 2025-05-19 
14:31:14.513455 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b41d3d7b-c0e8-42b0-b403-509e3ccc1be2) 2025-05-19 14:31:14.516343 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b41d3d7b-c0e8-42b0-b403-509e3ccc1be2) 2025-05-19 14:31:14.516403 | orchestrator | 2025-05-19 14:31:14.516417 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:14.516428 | orchestrator | Monday 19 May 2025 14:31:14 +0000 (0:00:00.500) 0:00:03.828 ************ 2025-05-19 14:31:15.022184 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b9a454d9-5190-46d7-bf1d-412c3cdef809) 2025-05-19 14:31:15.022333 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b9a454d9-5190-46d7-bf1d-412c3cdef809) 2025-05-19 14:31:15.023230 | orchestrator | 2025-05-19 14:31:15.023728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:15.024255 | orchestrator | Monday 19 May 2025 14:31:15 +0000 (0:00:00.507) 0:00:04.336 ************ 2025-05-19 14:31:15.874316 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 14:31:15.875177 | orchestrator | 2025-05-19 14:31:15.878496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:31:15.878525 | orchestrator | Monday 19 May 2025 14:31:15 +0000 (0:00:00.851) 0:00:05.187 ************ 2025-05-19 14:31:16.292714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-19 14:31:16.294752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-19 14:31:16.299850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-19 14:31:16.299960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-19 14:31:16.299986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-19 14:31:16.300006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-19 14:31:16.300837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-19 14:31:16.301945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-19 14:31:16.303266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-19 14:31:16.304263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-19 14:31:16.305070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-19 14:31:16.305981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-19 14:31:16.307554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-19 14:31:16.308002 | orchestrator | 2025-05-19 14:31:16.308755 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:31:16.309095 | orchestrator | Monday 19 May 2025 14:31:16 +0000 (0:00:00.418) 0:00:05.606 ************ 2025-05-19 14:31:16.504940 | orchestrator | skipping: [testbed-node-3] 
2025-05-19 14:31:16.507869 | orchestrator |
2025-05-19 14:31:16.507909 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:16.507923 | orchestrator | Monday 19 May 2025 14:31:16 +0000 (0:00:00.209) 0:00:05.815 ************
2025-05-19 14:31:16.690377 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:16.691987 | orchestrator |
2025-05-19 14:31:16.692941 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:16.693791 | orchestrator | Monday 19 May 2025 14:31:16 +0000 (0:00:00.185) 0:00:06.001 ************
2025-05-19 14:31:16.900005 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:16.900516 | orchestrator |
2025-05-19 14:31:16.902198 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:16.903333 | orchestrator | Monday 19 May 2025 14:31:16 +0000 (0:00:00.213) 0:00:06.214 ************
2025-05-19 14:31:17.095313 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:17.095922 | orchestrator |
2025-05-19 14:31:17.099966 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:17.100456 | orchestrator | Monday 19 May 2025 14:31:17 +0000 (0:00:00.194) 0:00:06.409 ************
2025-05-19 14:31:17.296170 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:17.296400 | orchestrator |
2025-05-19 14:31:17.296866 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:17.298169 | orchestrator | Monday 19 May 2025 14:31:17 +0000 (0:00:00.199) 0:00:06.608 ************
2025-05-19 14:31:17.550793 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:17.552835 | orchestrator |
2025-05-19 14:31:17.553218 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:17.553827 | orchestrator | Monday 19 May 2025 14:31:17 +0000 (0:00:00.255) 0:00:06.864 ************
2025-05-19 14:31:17.771155 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:17.772120 | orchestrator |
2025-05-19 14:31:17.773002 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:17.774290 | orchestrator | Monday 19 May 2025 14:31:17 +0000 (0:00:00.219) 0:00:07.083 ************
2025-05-19 14:31:18.001838 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:18.003644 | orchestrator |
2025-05-19 14:31:18.003699 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:18.003844 | orchestrator | Monday 19 May 2025 14:31:17 +0000 (0:00:00.228) 0:00:07.312 ************
2025-05-19 14:31:19.116716 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-19 14:31:19.116825 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-19 14:31:19.117653 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-19 14:31:19.119267 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-19 14:31:19.120987 | orchestrator |
2025-05-19 14:31:19.121801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:19.122074 | orchestrator | Monday 19 May 2025 14:31:19 +0000 (0:00:01.111) 0:00:08.424 ************
2025-05-19 14:31:19.335026 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:19.335128 | orchestrator |
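The matching partition pass (_add-device-partitions.yml) appends the partition names that the gathered facts report for each device, which is why only sda1/sda14/sda15/sda16 register as ok on these nodes: sda is the OS disk and the blank sdb/sdc/sdd data disks have no partitions yet. A sketch under the same assumptions as above:

    # Sketch: append the partitions of one device to the running list.
    - name: Add known partitions to the list of available block devices
      ansible.builtin.set_fact:
        available_block_devices: >-
          {{ available_block_devices
             + (ansible_facts.devices[outer_item].partitions | list) }}
      when: ansible_facts.devices[outer_item].partitions | length > 0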
2025-05-19 14:31:19.336793 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:19.337106 | orchestrator | Monday 19 May 2025 14:31:19 +0000 (0:00:00.223) 0:00:08.648 ************
2025-05-19 14:31:19.604474 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:19.606974 | orchestrator |
2025-05-19 14:31:19.609968 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:19.611393 | orchestrator | Monday 19 May 2025 14:31:19 +0000 (0:00:00.268) 0:00:08.916 ************
2025-05-19 14:31:19.853672 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:19.855793 | orchestrator |
2025-05-19 14:31:19.856213 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:19.857547 | orchestrator | Monday 19 May 2025 14:31:19 +0000 (0:00:00.252) 0:00:09.168 ************
2025-05-19 14:31:20.055409 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:20.056574 | orchestrator |
2025-05-19 14:31:20.057061 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-19 14:31:20.057963 | orchestrator | Monday 19 May 2025 14:31:20 +0000 (0:00:00.202) 0:00:09.370 ************
2025-05-19 14:31:20.233676 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-05-19 14:31:20.234355 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-05-19 14:31:20.234688 | orchestrator |
2025-05-19 14:31:20.235897 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-19 14:31:20.239822 | orchestrator | Monday 19 May 2025 14:31:20 +0000 (0:00:00.179) 0:00:09.550 ************
2025-05-19 14:31:20.370735 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:20.370819 | orchestrator |
2025-05-19 14:31:20.371342 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-19 14:31:20.372085 | orchestrator | Monday 19 May 2025 14:31:20 +0000 (0:00:00.136) 0:00:09.687 ************
2025-05-19 14:31:20.484238 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:20.488842 | orchestrator |
2025-05-19 14:31:20.489390 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-19 14:31:20.490268 | orchestrator | Monday 19 May 2025 14:31:20 +0000 (0:00:00.111) 0:00:09.798 ************
2025-05-19 14:31:20.591931 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:20.592465 | orchestrator |
2025-05-19 14:31:20.595624 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-19 14:31:20.596724 | orchestrator | Monday 19 May 2025 14:31:20 +0000 (0:00:00.109) 0:00:09.907 ************
2025-05-19 14:31:20.725415 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:31:20.726762 | orchestrator |
2025-05-19 14:31:20.730687 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-19 14:31:20.732717 | orchestrator | Monday 19 May 2025 14:31:20 +0000 (0:00:00.133) 0:00:10.041 ************
2025-05-19 14:31:20.888988 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f79a0596-c901-5dda-8c3d-7673c0794e9f'}})
2025-05-19 14:31:20.892419 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'be132d09-93e5-58e2-99ec-48d3b83dc2dd'}})
2025-05-19 14:31:20.892469 | orchestrator |
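Note that every osd_lvm_uuid in this run (f79a0596-c901-5dda-..., be132d09-93e5-58e2-..., and the node-4/node-5 values later) is a version-5 UUID, i.e. name-based and deterministic, so re-running the play reproduces the same VG/LV names instead of inventing new ones. Ansible's built-in to_uuid filter emits exactly that kind of UUID; a sketch of how the "Set UUIDs for OSD VGs/LVs" step could derive them (the seed string is an assumption, not taken from this log):

    # Sketch: give each OSD device without a UUID a stable, name-based one.
    - name: Set UUIDs for OSD VGs/LVs
      ansible.builtin.set_fact:
        ceph_osd_devices: >-
          {{ ceph_osd_devices
             | combine({item.key: {'osd_lvm_uuid':
                 (inventory_hostname ~ '-' ~ item.key) | to_uuid}}) }}
      loop: "{{ ceph_osd_devices | dict2items }}"
      when: item.value is none   # matches the (item={'key': 'sdb', 'value': None}) lines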
2025-05-19 14:31:20.892481 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-19 14:31:20.892492 | orchestrator | Monday 19 May 2025 14:31:20 +0000 (0:00:00.161) 0:00:10.202 ************
2025-05-19 14:31:21.032269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f79a0596-c901-5dda-8c3d-7673c0794e9f'}})
2025-05-19 14:31:21.032694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'be132d09-93e5-58e2-99ec-48d3b83dc2dd'}})
2025-05-19 14:31:21.034184 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:21.034551 | orchestrator |
2025-05-19 14:31:21.037472 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-19 14:31:21.038668 | orchestrator | Monday 19 May 2025 14:31:21 +0000 (0:00:00.146) 0:00:10.349 ************
2025-05-19 14:31:21.309970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f79a0596-c901-5dda-8c3d-7673c0794e9f'}})
2025-05-19 14:31:21.310309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'be132d09-93e5-58e2-99ec-48d3b83dc2dd'}})
2025-05-19 14:31:21.311759 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:21.312646 | orchestrator |
2025-05-19 14:31:21.313289 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-19 14:31:21.315834 | orchestrator | Monday 19 May 2025 14:31:21 +0000 (0:00:00.278) 0:00:10.627 ************
2025-05-19 14:31:21.454078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f79a0596-c901-5dda-8c3d-7673c0794e9f'}})
2025-05-19 14:31:21.456845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'be132d09-93e5-58e2-99ec-48d3b83dc2dd'}})
2025-05-19 14:31:21.456874 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:21.456887 | orchestrator |
2025-05-19 14:31:21.456899 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-19 14:31:21.456911 | orchestrator | Monday 19 May 2025 14:31:21 +0000 (0:00:00.141) 0:00:10.768 ************
2025-05-19 14:31:21.592973 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:31:21.594078 | orchestrator |
2025-05-19 14:31:21.596422 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-19 14:31:21.596458 | orchestrator | Monday 19 May 2025 14:31:21 +0000 (0:00:00.139) 0:00:10.908 ************
2025-05-19 14:31:21.733513 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:31:21.734095 | orchestrator |
2025-05-19 14:31:21.734303 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-19 14:31:21.734704 | orchestrator | Monday 19 May 2025 14:31:21 +0000 (0:00:00.139) 0:00:11.047 ************
2025-05-19 14:31:21.837703 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:21.838520 | orchestrator |
2025-05-19 14:31:21.839030 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-19 14:31:21.839684 | orchestrator | Monday 19 May 2025 14:31:21 +0000 (0:00:00.105) 0:00:11.153 ************
2025-05-19 14:31:21.947256 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:21.949708 | orchestrator |
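Of the four lvm_volumes variants, only "block only" actually runs on these nodes; the db/wal variants are skipped because no separate DB or WAL devices are configured. The list it compiles mirrors the configuration printout that follows one-to-one; a sketch of the step, assuming the same set_fact-append pattern as above:

    # Sketch: one {data, data_vg} pair per OSD device, both named from the UUID.
    - name: Generate lvm_volumes structure (block only)
      ansible.builtin.set_fact:
        lvm_volumes: >-
          {{ lvm_volumes | default([])
             + [{'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
                 'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid}] }}
      loop: "{{ ceph_osd_devices | dict2items }}"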
2025-05-19 14:31:21.955358 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-19 14:31:21.955404 | orchestrator | Monday 19 May 2025 14:31:21 +0000 (0:00:00.109) 0:00:11.262 ************
2025-05-19 14:31:22.049696 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:22.049762 | orchestrator |
2025-05-19 14:31:22.050916 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-19 14:31:22.051740 | orchestrator | Monday 19 May 2025 14:31:22 +0000 (0:00:00.140) 0:00:11.505 ************
2025-05-19 14:31:22.189303 | orchestrator | ok: [testbed-node-3] => {
2025-05-19 14:31:22.189665 | orchestrator |     "ceph_osd_devices": {
2025-05-19 14:31:22.190273 | orchestrator |         "sdb": {
2025-05-19 14:31:22.190821 | orchestrator |             "osd_lvm_uuid": "f79a0596-c901-5dda-8c3d-7673c0794e9f"
2025-05-19 14:31:22.193692 | orchestrator |         },
2025-05-19 14:31:22.194160 | orchestrator |         "sdc": {
2025-05-19 14:31:22.194535 | orchestrator |             "osd_lvm_uuid": "be132d09-93e5-58e2-99ec-48d3b83dc2dd"
2025-05-19 14:31:22.194917 | orchestrator |         }
2025-05-19 14:31:22.195259 | orchestrator |     }
2025-05-19 14:31:22.195533 | orchestrator | }
2025-05-19 14:31:22.195833 | orchestrator |
2025-05-19 14:31:22.196535 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-19 14:31:22.196829 | orchestrator | Monday 19 May 2025 14:31:22 +0000 (0:00:00.113) 0:00:11.619 ************
2025-05-19 14:31:22.303657 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:22.303735 | orchestrator |
2025-05-19 14:31:22.303830 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-19 14:31:22.304117 | orchestrator | Monday 19 May 2025 14:31:22 +0000 (0:00:00.125) 0:00:11.744 ************
2025-05-19 14:31:22.428321 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:22.428512 | orchestrator |
2025-05-19 14:31:22.429624 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-19 14:31:22.429679 | orchestrator | Monday 19 May 2025 14:31:22 +0000 (0:00:00.107) 0:00:11.851 ************
2025-05-19 14:31:22.537767 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:31:22.540321 | orchestrator |
2025-05-19 14:31:22.542091 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-19 14:31:22.543158 | orchestrator | Monday 19 May 2025 14:31:22 +0000 (0:00:00.191) 0:00:12.043 ************
2025-05-19 14:31:22.728482 | orchestrator | changed: [testbed-node-3] => {
2025-05-19 14:31:22.728603 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-19 14:31:22.728792 | orchestrator |         "ceph_osd_devices": {
2025-05-19 14:31:22.732352 | orchestrator |             "sdb": {
2025-05-19 14:31:22.732948 | orchestrator |                 "osd_lvm_uuid": "f79a0596-c901-5dda-8c3d-7673c0794e9f"
2025-05-19 14:31:22.733399 | orchestrator |             },
2025-05-19 14:31:22.733920 | orchestrator |             "sdc": {
2025-05-19 14:31:22.734543 | orchestrator |                 "osd_lvm_uuid": "be132d09-93e5-58e2-99ec-48d3b83dc2dd"
2025-05-19 14:31:22.735206 | orchestrator |             }
2025-05-19 14:31:22.736728 | orchestrator |         },
2025-05-19 14:31:22.736739 | orchestrator |         "lvm_volumes": [
2025-05-19 14:31:22.736743 | orchestrator |             {
2025-05-19 14:31:22.737419 | orchestrator |                 "data": "osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f",
2025-05-19 14:31:22.737782 | orchestrator |                 "data_vg": "ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f"
2025-05-19 14:31:22.738717 | orchestrator |             },
2025-05-19 14:31:22.738769 | orchestrator |             {
2025-05-19 14:31:22.739170 | orchestrator |                 "data": "osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd",
2025-05-19 14:31:22.739701 | orchestrator |                 "data_vg": "ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd"
2025-05-19 14:31:22.740218 | orchestrator |             }
2025-05-19 14:31:22.740798 | orchestrator |         ]
2025-05-19 14:31:22.741352 | orchestrator |     }
2025-05-19 14:31:22.741807 | orchestrator | }
2025-05-19 14:31:22.742248 | orchestrator |
2025-05-19 14:31:22.742641 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-19 14:31:22.743092 | orchestrator | Monday 19 May 2025 14:31:22 +0000 (0:00:00.191) 0:00:12.043 ************
2025-05-19 14:31:24.535500 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-19 14:31:24.537497 | orchestrator |
2025-05-19 14:31:24.541012 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-19 14:31:24.541054 | orchestrator |
2025-05-19 14:31:24.544390 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-19 14:31:24.544437 | orchestrator | Monday 19 May 2025 14:31:24 +0000 (0:00:01.807) 0:00:13.851 ************
2025-05-19 14:31:24.806172 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-19 14:31:24.806274 | orchestrator |
2025-05-19 14:31:24.806384 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-19 14:31:24.806494 | orchestrator | Monday 19 May 2025 14:31:24 +0000 (0:00:00.270) 0:00:14.121 ************
2025-05-19 14:31:25.018706 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:31:25.019758 | orchestrator |
2025-05-19 14:31:25.020928 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:25.021629 | orchestrator | Monday 19 May 2025 14:31:25 +0000 (0:00:00.213) 0:00:14.334 ************
2025-05-19 14:31:25.283954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-19 14:31:25.286416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-19 14:31:25.286670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-19 14:31:25.289759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-19 14:31:25.289780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-19 14:31:25.290381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-19 14:31:25.291136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-19 14:31:25.291480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-05-19 14:31:25.292316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-05-19 14:31:25.293311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-05-19 14:31:25.293580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-05-19 14:31:25.293985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-05-19 14:31:25.294545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
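The "Print configuration data" task above reports changed even though it only prints. That is the usual pattern for forcing a notified handler to run at the end of the play: a debug-style task marked changed_when: true, which is why "RUNNING HANDLER [Write configuration file]" fires immediately afterwards, delegated to testbed-manager. A sketch of that wiring; the copy destination is a guess, not taken from this log:

    # Sketch: print the computed data and force the write-out handler to run.
    - name: Print configuration data
      ansible.builtin.debug:
        var: _ceph_configure_lvm_config_data
      changed_when: true
      notify: Write configuration file

    # Handler sketch: persist the data on the manager node (hypothetical path).
    handlers:
      - name: Write configuration file
        ansible.builtin.copy:
          content: "{{ _ceph_configure_lvm_config_data | to_nice_yaml }}"
          dest: "/opt/configuration/inventory/host_vars/{{ inventory_hostname }}.yml"
        delegate_to: testbed-manager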
2025-05-19 14:31:25.296369 | orchestrator |
2025-05-19 14:31:25.296630 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:25.296967 | orchestrator | Monday 19 May 2025 14:31:25 +0000 (0:00:00.265) 0:00:14.600 ************
2025-05-19 14:31:25.467715 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:25.468640 | orchestrator |
2025-05-19 14:31:25.469281 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:25.469776 | orchestrator | Monday 19 May 2025 14:31:25 +0000 (0:00:00.183) 0:00:14.783 ************
2025-05-19 14:31:25.629547 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:25.629681 | orchestrator |
2025-05-19 14:31:25.629695 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:25.629725 | orchestrator | Monday 19 May 2025 14:31:25 +0000 (0:00:00.158) 0:00:14.942 ************
2025-05-19 14:31:25.787758 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:25.787911 | orchestrator |
2025-05-19 14:31:25.788797 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:25.789625 | orchestrator | Monday 19 May 2025 14:31:25 +0000 (0:00:00.160) 0:00:15.103 ************
2025-05-19 14:31:25.952156 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:25.952323 | orchestrator |
2025-05-19 14:31:25.952758 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:25.953737 | orchestrator | Monday 19 May 2025 14:31:25 +0000 (0:00:00.164) 0:00:15.268 ************
2025-05-19 14:31:26.512491 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:26.513029 | orchestrator |
2025-05-19 14:31:26.514801 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:26.515849 | orchestrator | Monday 19 May 2025 14:31:26 +0000 (0:00:00.558) 0:00:15.826 ************
2025-05-19 14:31:26.710736 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:26.712768 | orchestrator |
2025-05-19 14:31:26.713617 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:26.716268 | orchestrator | Monday 19 May 2025 14:31:26 +0000 (0:00:00.196) 0:00:16.023 ************
2025-05-19 14:31:26.938288 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:26.938851 | orchestrator |
2025-05-19 14:31:26.940224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:26.940549 | orchestrator | Monday 19 May 2025 14:31:26 +0000 (0:00:00.230) 0:00:16.253 ************
2025-05-19 14:31:27.163257 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:27.163960 | orchestrator |
2025-05-19 14:31:27.164942 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:27.165927 | orchestrator | Monday 19 May 2025 14:31:27 +0000 (0:00:00.224) 0:00:16.478 ************
2025-05-19 14:31:27.634664 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e)
2025-05-19 14:31:27.634741 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e)
2025-05-19 14:31:27.635548 | orchestrator |
2025-05-19 14:31:27.636142 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:27.636943 | orchestrator | Monday 19 May 2025 14:31:27 +0000 (0:00:00.470) 0:00:16.948 ************
2025-05-19 14:31:28.064355 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538)
2025-05-19 14:31:28.068062 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538)
2025-05-19 14:31:28.071894 | orchestrator |
2025-05-19 14:31:28.072780 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:28.076704 | orchestrator | Monday 19 May 2025 14:31:28 +0000 (0:00:00.430) 0:00:17.379 ************
2025-05-19 14:31:28.571155 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964)
2025-05-19 14:31:28.571282 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964)
2025-05-19 14:31:28.573100 | orchestrator |
2025-05-19 14:31:28.573382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:28.575344 | orchestrator | Monday 19 May 2025 14:31:28 +0000 (0:00:00.504) 0:00:17.883 ************
2025-05-19 14:31:29.269937 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a)
2025-05-19 14:31:29.270203 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a)
2025-05-19 14:31:29.270278 | orchestrator |
2025-05-19 14:31:29.270403 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:29.270524 | orchestrator | Monday 19 May 2025 14:31:29 +0000 (0:00:00.697) 0:00:18.581 ************
2025-05-19 14:31:29.790697 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-19 14:31:29.792001 | orchestrator |
2025-05-19 14:31:29.792308 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:29.793032 | orchestrator | Monday 19 May 2025 14:31:29 +0000 (0:00:00.519) 0:00:19.101 ************
2025-05-19 14:31:30.178001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-05-19 14:31:30.178685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-05-19 14:31:30.179299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-05-19 14:31:30.181109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-05-19 14:31:30.181687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-05-19 14:31:30.182103 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-05-19 14:31:30.182790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-05-19 14:31:30.183312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-05-19 14:31:30.183734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-05-19 14:31:30.187673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-05-19 14:31:30.187776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-05-19 14:31:30.187916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-05-19 14:31:30.188363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-05-19 14:31:30.188763 | orchestrator |
2025-05-19 14:31:30.189193 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:30.189654 | orchestrator | Monday 19 May 2025 14:31:30 +0000 (0:00:00.393) 0:00:19.494 ************
2025-05-19 14:31:30.402744 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:30.402915 | orchestrator |
2025-05-19 14:31:30.405280 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:30.405954 | orchestrator | Monday 19 May 2025 14:31:30 +0000 (0:00:00.222) 0:00:19.716 ************
2025-05-19 14:31:31.145964 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:31.150595 | orchestrator |
2025-05-19 14:31:31.151205 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:31.154403 | orchestrator | Monday 19 May 2025 14:31:31 +0000 (0:00:00.745) 0:00:20.462 ************
2025-05-19 14:31:31.341166 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:31.344668 | orchestrator |
2025-05-19 14:31:31.344786 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:31.346360 | orchestrator | Monday 19 May 2025 14:31:31 +0000 (0:00:00.195) 0:00:20.658 ************
2025-05-19 14:31:31.540440 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:31.541373 | orchestrator |
2025-05-19 14:31:31.541702 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:31.544247 | orchestrator | Monday 19 May 2025 14:31:31 +0000 (0:00:00.196) 0:00:20.854 ************
2025-05-19 14:31:31.727311 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:31.728389 | orchestrator |
2025-05-19 14:31:31.729203 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:31.729928 | orchestrator | Monday 19 May 2025 14:31:31 +0000 (0:00:00.187) 0:00:21.042 ************
2025-05-19 14:31:31.936762 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:31.941894 | orchestrator |
2025-05-19 14:31:31.941951 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:31.942109 | orchestrator | Monday 19 May 2025 14:31:31 +0000 (0:00:00.211) 0:00:21.253 ************
2025-05-19 14:31:32.129671 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:32.129864 | orchestrator |
2025-05-19 14:31:32.131890 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:32.132222 | orchestrator | Monday 19 May 2025 14:31:32 +0000 (0:00:00.190) 0:00:21.443 ************
2025-05-19 14:31:32.308244 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:32.308349 | orchestrator |
2025-05-19 14:31:32.308449 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:32.308710 | orchestrator | Monday 19 May 2025 14:31:32 +0000 (0:00:00.176) 0:00:21.619 ************
2025-05-19 14:31:32.901023 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-05-19 14:31:32.904954 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-05-19 14:31:32.904987 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-05-19 14:31:32.905339 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-05-19 14:31:32.905907 | orchestrator |
2025-05-19 14:31:32.906147 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:32.906367 | orchestrator | Monday 19 May 2025 14:31:32 +0000 (0:00:00.599) 0:00:22.218 ************
2025-05-19 14:31:33.048721 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:33.048832 | orchestrator |
2025-05-19 14:31:33.049026 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:33.049293 | orchestrator | Monday 19 May 2025 14:31:33 +0000 (0:00:00.146) 0:00:22.364 ************
2025-05-19 14:31:33.209708 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:33.209888 | orchestrator |
2025-05-19 14:31:33.211482 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:33.211873 | orchestrator | Monday 19 May 2025 14:31:33 +0000 (0:00:00.161) 0:00:22.526 ************
2025-05-19 14:31:33.347113 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:33.348284 | orchestrator |
2025-05-19 14:31:33.352843 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:33.352872 | orchestrator | Monday 19 May 2025 14:31:33 +0000 (0:00:00.138) 0:00:22.664 ************
2025-05-19 14:31:33.480357 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:33.484311 | orchestrator |
2025-05-19 14:31:33.484357 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-19 14:31:33.484434 | orchestrator | Monday 19 May 2025 14:31:33 +0000 (0:00:00.132) 0:00:22.797 ************
2025-05-19 14:31:33.741539 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-05-19 14:31:33.744684 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-05-19 14:31:33.749026 | orchestrator |
2025-05-19 14:31:33.751752 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-19 14:31:33.751865 | orchestrator | Monday 19 May 2025 14:31:33 +0000 (0:00:00.259) 0:00:23.056 ************
2025-05-19 14:31:33.866842 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:33.868470 | orchestrator |
2025-05-19 14:31:33.868881 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-19 14:31:33.869332 | orchestrator | Monday 19 May 2025 14:31:33 +0000 (0:00:00.125) 0:00:23.182 ************
2025-05-19 14:31:33.981518 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:33.981878 | orchestrator |
2025-05-19 14:31:33.982690 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-19 14:31:33.984440 | orchestrator | Monday 19 May 2025 14:31:33 +0000 (0:00:00.111) 0:00:23.293 ************
2025-05-19 14:31:34.091284 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:34.091365 | orchestrator |
2025-05-19 14:31:34.092542 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-19 14:31:34.095543 | orchestrator | Monday 19 May 2025 14:31:34 +0000 (0:00:00.111) 0:00:23.405 ************
2025-05-19 14:31:34.227311 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:31:34.227548 | orchestrator |
2025-05-19 14:31:34.228534 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-19 14:31:34.229300 | orchestrator | Monday 19 May 2025 14:31:34 +0000 (0:00:00.138) 0:00:23.544 ************
2025-05-19 14:31:34.366284 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '14b77220-8a02-5c14-b369-aaa75d02e7a5'}})
2025-05-19 14:31:34.367429 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd28da045-49d6-58b1-95f0-26301c413660'}})
2025-05-19 14:31:34.371608 | orchestrator |
2025-05-19 14:31:34.372902 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-19 14:31:34.374232 | orchestrator | Monday 19 May 2025 14:31:34 +0000 (0:00:00.137) 0:00:23.682 ************
2025-05-19 14:31:34.490214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '14b77220-8a02-5c14-b369-aaa75d02e7a5'}})
2025-05-19 14:31:34.491057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd28da045-49d6-58b1-95f0-26301c413660'}})
2025-05-19 14:31:34.491495 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:34.491710 | orchestrator |
2025-05-19 14:31:34.492016 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-19 14:31:34.492253 | orchestrator | Monday 19 May 2025 14:31:34 +0000 (0:00:00.124) 0:00:23.806 ************
2025-05-19 14:31:34.611418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '14b77220-8a02-5c14-b369-aaa75d02e7a5'}})
2025-05-19 14:31:34.612453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd28da045-49d6-58b1-95f0-26301c413660'}})
2025-05-19 14:31:34.612483 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:34.613204 | orchestrator |
2025-05-19 14:31:34.613461 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-19 14:31:34.613805 | orchestrator | Monday 19 May 2025 14:31:34 +0000 (0:00:00.122) 0:00:23.928 ************
2025-05-19 14:31:34.758412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '14b77220-8a02-5c14-b369-aaa75d02e7a5'}})
2025-05-19 14:31:34.763751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd28da045-49d6-58b1-95f0-26301c413660'}})
2025-05-19 14:31:34.763789 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:34.764668 | orchestrator |
2025-05-19 14:31:34.766123 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-19 14:31:34.766526 | orchestrator | Monday 19 May 2025 14:31:34 +0000 (0:00:00.143) 0:00:24.072 ************
2025-05-19 14:31:34.902824 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:31:34.902892 | orchestrator |
2025-05-19 14:31:34.902904 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-19 14:31:34.902916 | orchestrator | Monday 19 May 2025 14:31:34 +0000 (0:00:00.142) 0:00:24.214 ************
2025-05-19 14:31:35.021191 | orchestrator | ok: [testbed-node-4]
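Nothing in these plays touches the disks yet: they only compute lvm_volumes, the input format that ceph-ansible's ceph-volume integration consumes. Downstream of this job, each entry is eventually materialized by something equivalent to the following command per OSD (illustrative only; this log never reaches that step):

    # Sketch: how a compiled lvm_volumes entry maps onto a ceph-volume call.
    - name: Create a BlueStore OSD per lvm_volumes entry (illustrative)
      ansible.builtin.command: >-
        ceph-volume lvm create --bluestore
        --data {{ item.data_vg }}/{{ item.data }}
      loop: "{{ lvm_volumes }}"
      become: true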
2025-05-19 14:31:35.022369 | orchestrator |
2025-05-19 14:31:35.025328 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-19 14:31:35.025603 | orchestrator | Monday 19 May 2025 14:31:35 +0000 (0:00:00.121) 0:00:24.336 ************
2025-05-19 14:31:35.135849 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:35.136065 | orchestrator |
2025-05-19 14:31:35.136676 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-19 14:31:35.137692 | orchestrator | Monday 19 May 2025 14:31:35 +0000 (0:00:00.116) 0:00:24.453 ************
2025-05-19 14:31:35.376703 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:35.378203 | orchestrator |
2025-05-19 14:31:35.379306 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-19 14:31:35.380286 | orchestrator | Monday 19 May 2025 14:31:35 +0000 (0:00:00.239) 0:00:24.693 ************
2025-05-19 14:31:35.473545 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:35.474101 | orchestrator |
2025-05-19 14:31:35.476505 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-19 14:31:35.476956 | orchestrator | Monday 19 May 2025 14:31:35 +0000 (0:00:00.097) 0:00:24.790 ************
2025-05-19 14:31:35.603411 | orchestrator | ok: [testbed-node-4] => {
2025-05-19 14:31:35.603505 | orchestrator |     "ceph_osd_devices": {
2025-05-19 14:31:35.604131 | orchestrator |         "sdb": {
2025-05-19 14:31:35.604794 | orchestrator |             "osd_lvm_uuid": "14b77220-8a02-5c14-b369-aaa75d02e7a5"
2025-05-19 14:31:35.605438 | orchestrator |         },
2025-05-19 14:31:35.606542 | orchestrator |         "sdc": {
2025-05-19 14:31:35.607122 | orchestrator |             "osd_lvm_uuid": "d28da045-49d6-58b1-95f0-26301c413660"
2025-05-19 14:31:35.608637 | orchestrator |         }
2025-05-19 14:31:35.609332 | orchestrator |     }
2025-05-19 14:31:35.609778 | orchestrator | }
2025-05-19 14:31:35.610143 | orchestrator |
2025-05-19 14:31:35.610803 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-19 14:31:35.610915 | orchestrator | Monday 19 May 2025 14:31:35 +0000 (0:00:00.126) 0:00:24.917 ************
2025-05-19 14:31:35.710669 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:35.712162 | orchestrator |
2025-05-19 14:31:35.714306 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-19 14:31:35.714986 | orchestrator | Monday 19 May 2025 14:31:35 +0000 (0:00:00.110) 0:00:25.027 ************
2025-05-19 14:31:35.833423 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:35.834224 | orchestrator |
2025-05-19 14:31:35.835158 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-19 14:31:35.835904 | orchestrator | Monday 19 May 2025 14:31:35 +0000 (0:00:00.120) 0:00:25.148 ************
2025-05-19 14:31:35.953519 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:31:35.956199 | orchestrator |
2025-05-19 14:31:35.956232 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-19 14:31:35.956245 | orchestrator | Monday 19 May 2025 14:31:35 +0000 (0:00:00.121) 0:00:25.269 ************
2025-05-19 14:31:36.168272 | orchestrator | changed: [testbed-node-4] => {
2025-05-19 14:31:36.169260 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-19 14:31:36.172359 | orchestrator |         "ceph_osd_devices": {
2025-05-19 14:31:36.173168 | orchestrator |             "sdb": {
2025-05-19 14:31:36.173942 | orchestrator |                 "osd_lvm_uuid": "14b77220-8a02-5c14-b369-aaa75d02e7a5"
2025-05-19 14:31:36.174296 | orchestrator |             },
2025-05-19 14:31:36.175986 | orchestrator |             "sdc": {
2025-05-19 14:31:36.176771 | orchestrator |                 "osd_lvm_uuid": "d28da045-49d6-58b1-95f0-26301c413660"
2025-05-19 14:31:36.178331 | orchestrator |             }
2025-05-19 14:31:36.178936 | orchestrator |         },
2025-05-19 14:31:36.179595 | orchestrator |         "lvm_volumes": [
2025-05-19 14:31:36.180369 | orchestrator |             {
2025-05-19 14:31:36.182174 | orchestrator |                 "data": "osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5",
2025-05-19 14:31:36.185950 | orchestrator |                 "data_vg": "ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5"
2025-05-19 14:31:36.186315 | orchestrator |             },
2025-05-19 14:31:36.186982 | orchestrator |             {
2025-05-19 14:31:36.187155 | orchestrator |                 "data": "osd-block-d28da045-49d6-58b1-95f0-26301c413660",
2025-05-19 14:31:36.188297 | orchestrator |                 "data_vg": "ceph-d28da045-49d6-58b1-95f0-26301c413660"
2025-05-19 14:31:36.188662 | orchestrator |             }
2025-05-19 14:31:36.189021 | orchestrator |         ]
2025-05-19 14:31:36.189372 | orchestrator |     }
2025-05-19 14:31:36.189680 | orchestrator | }
2025-05-19 14:31:36.190374 | orchestrator |
2025-05-19 14:31:36.191251 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-19 14:31:36.192988 | orchestrator | Monday 19 May 2025 14:31:36 +0000 (0:00:00.213) 0:00:25.483 ************
2025-05-19 14:31:37.186424 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-19 14:31:37.186588 | orchestrator |
2025-05-19 14:31:37.187353 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-19 14:31:37.189602 | orchestrator |
2025-05-19 14:31:37.190126 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-19 14:31:37.191994 | orchestrator | Monday 19 May 2025 14:31:37 +0000 (0:00:01.017) 0:00:26.500 ************
2025-05-19 14:31:37.550687 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-19 14:31:37.551120 | orchestrator |
2025-05-19 14:31:37.551411 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-19 14:31:37.552103 | orchestrator | Monday 19 May 2025 14:31:37 +0000 (0:00:00.366) 0:00:26.867 ************
2025-05-19 14:31:38.071462 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:31:38.075255 | orchestrator |
2025-05-19 14:31:38.075709 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:38.076290 | orchestrator | Monday 19 May 2025 14:31:38 +0000 (0:00:00.517) 0:00:27.385 ************
2025-05-19 14:31:38.436277 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-19 14:31:38.436480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-19 14:31:38.436758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-19 14:31:38.436952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-19 14:31:38.437278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-19 14:31:38.440484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-19 14:31:38.442122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-19 14:31:38.442146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-05-19 14:31:38.442157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-05-19 14:31:38.442168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-19 14:31:38.442178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-19 14:31:38.442189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-19 14:31:38.442200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-19 14:31:38.442471 | orchestrator |
2025-05-19 14:31:38.443004 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:38.445536 | orchestrator | Monday 19 May 2025 14:31:38 +0000 (0:00:00.368) 0:00:27.753 ************
2025-05-19 14:31:38.624788 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:38.626251 | orchestrator |
2025-05-19 14:31:38.627489 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:38.630097 | orchestrator | Monday 19 May 2025 14:31:38 +0000 (0:00:00.187) 0:00:27.940 ************
2025-05-19 14:31:38.822396 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:38.824159 | orchestrator |
2025-05-19 14:31:38.825070 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:38.826699 | orchestrator | Monday 19 May 2025 14:31:38 +0000 (0:00:00.198) 0:00:28.139 ************
2025-05-19 14:31:39.007356 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:39.007438 | orchestrator |
2025-05-19 14:31:39.010760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:39.011268 | orchestrator | Monday 19 May 2025 14:31:39 +0000 (0:00:00.181) 0:00:28.320 ************
2025-05-19 14:31:39.189849 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:39.191920 | orchestrator |
2025-05-19 14:31:39.191991 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:39.196666 | orchestrator | Monday 19 May 2025 14:31:39 +0000 (0:00:00.185) 0:00:28.506 ************
2025-05-19 14:31:39.384814 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:39.385349 | orchestrator |
2025-05-19 14:31:39.385761 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:39.386125 | orchestrator | Monday 19 May 2025 14:31:39 +0000 (0:00:00.195) 0:00:28.701 ************
2025-05-19 14:31:39.564771 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:39.564959 | orchestrator |
2025-05-19 14:31:39.564981 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:31:39.567114 | orchestrator | Monday 19 May 2025 14:31:39 +0000 (0:00:00.178) 0:00:28.880 ************
2025-05-19 14:31:39.719769 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:39.722429 | orchestrator |
****************** 2025-05-19 14:31:39.723255 | orchestrator | Monday 19 May 2025 14:31:39 +0000 (0:00:00.156) 0:00:29.036 ************ 2025-05-19 14:31:39.902928 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:39.903794 | orchestrator | 2025-05-19 14:31:39.904078 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:39.906983 | orchestrator | Monday 19 May 2025 14:31:39 +0000 (0:00:00.183) 0:00:29.220 ************ 2025-05-19 14:31:40.461911 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4) 2025-05-19 14:31:40.462995 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4) 2025-05-19 14:31:40.463773 | orchestrator | 2025-05-19 14:31:40.464753 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:40.465467 | orchestrator | Monday 19 May 2025 14:31:40 +0000 (0:00:00.557) 0:00:29.777 ************ 2025-05-19 14:31:41.093858 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834) 2025-05-19 14:31:41.098447 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834) 2025-05-19 14:31:41.098509 | orchestrator | 2025-05-19 14:31:41.098825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:41.099964 | orchestrator | Monday 19 May 2025 14:31:41 +0000 (0:00:00.631) 0:00:30.409 ************ 2025-05-19 14:31:41.507419 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738) 2025-05-19 14:31:41.508328 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738) 2025-05-19 14:31:41.509245 | orchestrator | 2025-05-19 14:31:41.509878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:41.511168 | orchestrator | Monday 19 May 2025 14:31:41 +0000 (0:00:00.414) 0:00:30.824 ************ 2025-05-19 14:31:41.905356 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb) 2025-05-19 14:31:41.905446 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb) 2025-05-19 14:31:41.905461 | orchestrator | 2025-05-19 14:31:41.905474 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:31:41.905486 | orchestrator | Monday 19 May 2025 14:31:41 +0000 (0:00:00.393) 0:00:31.217 ************ 2025-05-19 14:31:42.172973 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 14:31:42.174702 | orchestrator | 2025-05-19 14:31:42.174734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:31:42.175907 | orchestrator | Monday 19 May 2025 14:31:42 +0000 (0:00:00.271) 0:00:31.488 ************ 2025-05-19 14:31:42.495087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-19 14:31:42.495900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-19 14:31:42.499882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-19 14:31:42.500168 | orchestrator 
2025-05-19 14:31:42.500168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-05-19 14:31:42.501287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-05-19 14:31:42.501992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-05-19 14:31:42.502961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-05-19 14:31:42.503778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-05-19 14:31:42.504419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-05-19 14:31:42.505511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-05-19 14:31:42.506928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-05-19 14:31:42.507729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-05-19 14:31:42.509471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-05-19 14:31:42.510899 | orchestrator |
2025-05-19 14:31:42.512034 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:42.512955 | orchestrator | Monday 19 May 2025 14:31:42 +0000 (0:00:00.321) 0:00:31.810 ************
2025-05-19 14:31:42.673478 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:42.678181 | orchestrator |
2025-05-19 14:31:42.679593 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:42.680739 | orchestrator | Monday 19 May 2025 14:31:42 +0000 (0:00:00.178) 0:00:31.988 ************
2025-05-19 14:31:42.857148 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:42.857855 | orchestrator |
2025-05-19 14:31:42.861254 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:42.861932 | orchestrator | Monday 19 May 2025 14:31:42 +0000 (0:00:00.183) 0:00:32.172 ************
2025-05-19 14:31:43.033867 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:43.033944 | orchestrator |
2025-05-19 14:31:43.035448 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:43.038470 | orchestrator | Monday 19 May 2025 14:31:43 +0000 (0:00:00.174) 0:00:32.346 ************
2025-05-19 14:31:43.221150 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:43.225447 | orchestrator |
2025-05-19 14:31:43.225488 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:43.226514 | orchestrator | Monday 19 May 2025 14:31:43 +0000 (0:00:00.188) 0:00:32.535 ************
2025-05-19 14:31:43.413102 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:43.413403 | orchestrator |
2025-05-19 14:31:43.413989 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 14:31:43.414358 | orchestrator | Monday 19 May 2025 14:31:43 +0000 (0:00:00.190) 0:00:32.726 ************
2025-05-19 14:31:43.866256 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:43.866409 | orchestrator |
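Before ceph-volume can consume the compiled entries, the VG/LV pairs named ceph-<uuid>/osd-block-<uuid> have to exist on the data disks. The plays in this log only record the names; actually creating the pairs would look roughly like this (a sketch using community.general modules, not taken from this log):

    # Sketch: materialize one VG + LV per OSD device from ceph_osd_devices.
    - name: Create one VG per OSD data disk (hypothetical)
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"
      become: true

    - name: Create the matching LV spanning the whole VG (hypothetical)
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: 100%VG
      loop: "{{ ceph_osd_devices | dict2items }}"
      become: true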
************* 2025-05-19 14:31:43.866920 | orchestrator | Monday 19 May 2025 14:31:43 +0000 (0:00:00.455) 0:00:33.182 ************ 2025-05-19 14:31:44.040615 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:44.040992 | orchestrator | 2025-05-19 14:31:44.042596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:31:44.043084 | orchestrator | Monday 19 May 2025 14:31:44 +0000 (0:00:00.173) 0:00:33.355 ************ 2025-05-19 14:31:44.223801 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:44.228204 | orchestrator | 2025-05-19 14:31:44.229014 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:31:44.229524 | orchestrator | Monday 19 May 2025 14:31:44 +0000 (0:00:00.182) 0:00:33.538 ************ 2025-05-19 14:31:44.878917 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-19 14:31:44.880143 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-19 14:31:44.880461 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-19 14:31:44.881650 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-19 14:31:44.882129 | orchestrator | 2025-05-19 14:31:44.882511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:31:44.883253 | orchestrator | Monday 19 May 2025 14:31:44 +0000 (0:00:00.644) 0:00:34.183 ************ 2025-05-19 14:31:45.083935 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:45.084148 | orchestrator | 2025-05-19 14:31:45.085816 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:31:45.087064 | orchestrator | Monday 19 May 2025 14:31:45 +0000 (0:00:00.212) 0:00:34.396 ************ 2025-05-19 14:31:45.269800 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:45.270921 | orchestrator | 2025-05-19 14:31:45.271976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:31:45.272688 | orchestrator | Monday 19 May 2025 14:31:45 +0000 (0:00:00.187) 0:00:34.583 ************ 2025-05-19 14:31:45.486713 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:45.488156 | orchestrator | 2025-05-19 14:31:45.489757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:31:45.491447 | orchestrator | Monday 19 May 2025 14:31:45 +0000 (0:00:00.217) 0:00:34.800 ************ 2025-05-19 14:31:45.680137 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:45.680701 | orchestrator | 2025-05-19 14:31:45.681390 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-19 14:31:45.682144 | orchestrator | Monday 19 May 2025 14:31:45 +0000 (0:00:00.194) 0:00:34.995 ************ 2025-05-19 14:31:45.894101 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-19 14:31:45.895196 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-19 14:31:45.896740 | orchestrator | 2025-05-19 14:31:45.897451 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-19 14:31:45.898424 | orchestrator | Monday 19 May 2025 14:31:45 +0000 (0:00:00.206) 0:00:35.201 ************ 2025-05-19 14:31:46.013895 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:46.015228 | orchestrator | 2025-05-19 14:31:46.016178 | orchestrator | TASK [Generate DB 
2025-05-19 14:31:46.016178 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-19 14:31:46.017229 | orchestrator | Monday 19 May 2025 14:31:46 +0000 (0:00:00.128) 0:00:35.329 ************
2025-05-19 14:31:46.143615 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:46.143713 | orchestrator |
2025-05-19 14:31:46.144592 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-19 14:31:46.145423 | orchestrator | Monday 19 May 2025 14:31:46 +0000 (0:00:00.129) 0:00:35.458 ************
2025-05-19 14:31:46.271679 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:46.272769 | orchestrator |
2025-05-19 14:31:46.274780 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-19 14:31:46.275899 | orchestrator | Monday 19 May 2025 14:31:46 +0000 (0:00:00.128) 0:00:35.587 ************
2025-05-19 14:31:46.766077 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:31:46.766176 | orchestrator |
2025-05-19 14:31:46.766994 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-19 14:31:46.768419 | orchestrator | Monday 19 May 2025 14:31:46 +0000 (0:00:00.491) 0:00:36.078 ************
2025-05-19 14:31:46.950969 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '18cd8a80-96d5-5946-80eb-7d63885b2b76'}})
2025-05-19 14:31:46.951890 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ad566f4e-67fb-565a-8346-037c8100dc24'}})
2025-05-19 14:31:46.952387 | orchestrator |
2025-05-19 14:31:46.953952 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-19 14:31:46.955996 | orchestrator | Monday 19 May 2025 14:31:46 +0000 (0:00:00.185) 0:00:36.264 ************
2025-05-19 14:31:47.118355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '18cd8a80-96d5-5946-80eb-7d63885b2b76'}})
2025-05-19 14:31:47.119168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ad566f4e-67fb-565a-8346-037c8100dc24'}})
2025-05-19 14:31:47.119727 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:47.120649 | orchestrator |
2025-05-19 14:31:47.122181 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-19 14:31:47.122765 | orchestrator | Monday 19 May 2025 14:31:47 +0000 (0:00:00.169) 0:00:36.433 ************
2025-05-19 14:31:47.268511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '18cd8a80-96d5-5946-80eb-7d63885b2b76'}})
2025-05-19 14:31:47.269154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ad566f4e-67fb-565a-8346-037c8100dc24'}})
2025-05-19 14:31:47.269836 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:31:47.270452 | orchestrator |
2025-05-19 14:31:47.270920 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-19 14:31:47.271357 | orchestrator | Monday 19 May 2025 14:31:47 +0000 (0:00:00.151) 0:00:36.584 ************
2025-05-19 14:31:47.423720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '18cd8a80-96d5-5946-80eb-7d63885b2b76'}})
2025-05-19 14:31:47.424218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ad566f4e-67fb-565a-8346-037c8100dc24'}})
14:31:47.424759 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:47.425653 | orchestrator | 2025-05-19 14:31:47.428541 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-19 14:31:47.428923 | orchestrator | Monday 19 May 2025 14:31:47 +0000 (0:00:00.147) 0:00:36.732 ************ 2025-05-19 14:31:47.551370 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:31:47.552170 | orchestrator | 2025-05-19 14:31:47.552219 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-19 14:31:47.552304 | orchestrator | Monday 19 May 2025 14:31:47 +0000 (0:00:00.134) 0:00:36.867 ************ 2025-05-19 14:31:47.677960 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:31:47.678721 | orchestrator | 2025-05-19 14:31:47.679686 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-19 14:31:47.680623 | orchestrator | Monday 19 May 2025 14:31:47 +0000 (0:00:00.126) 0:00:36.993 ************ 2025-05-19 14:31:47.819086 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:47.820614 | orchestrator | 2025-05-19 14:31:47.820849 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-19 14:31:47.823235 | orchestrator | Monday 19 May 2025 14:31:47 +0000 (0:00:00.141) 0:00:37.134 ************ 2025-05-19 14:31:47.953028 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:47.953311 | orchestrator | 2025-05-19 14:31:47.954605 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-19 14:31:47.954827 | orchestrator | Monday 19 May 2025 14:31:47 +0000 (0:00:00.134) 0:00:37.268 ************ 2025-05-19 14:31:48.080119 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:48.081063 | orchestrator | 2025-05-19 14:31:48.082521 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-19 14:31:48.083577 | orchestrator | Monday 19 May 2025 14:31:48 +0000 (0:00:00.127) 0:00:37.396 ************ 2025-05-19 14:31:48.214535 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 14:31:48.215250 | orchestrator |  "ceph_osd_devices": { 2025-05-19 14:31:48.216357 | orchestrator |  "sdb": { 2025-05-19 14:31:48.217665 | orchestrator |  "osd_lvm_uuid": "18cd8a80-96d5-5946-80eb-7d63885b2b76" 2025-05-19 14:31:48.218959 | orchestrator |  }, 2025-05-19 14:31:48.219687 | orchestrator |  "sdc": { 2025-05-19 14:31:48.220595 | orchestrator |  "osd_lvm_uuid": "ad566f4e-67fb-565a-8346-037c8100dc24" 2025-05-19 14:31:48.221217 | orchestrator |  } 2025-05-19 14:31:48.221836 | orchestrator |  } 2025-05-19 14:31:48.222280 | orchestrator | } 2025-05-19 14:31:48.222788 | orchestrator | 2025-05-19 14:31:48.223474 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-19 14:31:48.224004 | orchestrator | Monday 19 May 2025 14:31:48 +0000 (0:00:00.133) 0:00:37.529 ************ 2025-05-19 14:31:48.331903 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:48.332070 | orchestrator | 2025-05-19 14:31:48.332359 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-19 14:31:48.333303 | orchestrator | Monday 19 May 2025 14:31:48 +0000 (0:00:00.117) 0:00:37.647 ************ 2025-05-19 14:31:48.641883 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:48.643954 | orchestrator | 2025-05-19 14:31:48.646688 | orchestrator | 
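The ceph_osd_devices dict printed above shows the pattern the play applies: each disk entry contributes one lvm_volumes element whose LV and VG names are derived from its osd_lvm_uuid ("osd-block-<uuid>" in "ceph-<uuid>"). A minimal sketch of the block-only generation step, assuming the role accumulates the list with set_fact over dict2items (the task and variable names are taken from the log; the exact Jinja2 is an assumption):

- name: Generate lvm_volumes structure (block only)
  ansible.builtin.set_fact:
    # append one {data, data_vg} pair per OSD disk, both derived from osd_lvm_uuid
    lvm_volumes: >-
      {{ lvm_volumes | default([]) + [{
           'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
           'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid
         }] }}
  loop: "{{ ceph_osd_devices | dict2items }}"

For layouts with separate DB or WAL devices, the skipped "(block + db)", "(block + wal)" and "(block + db + wal)" variants would additionally set db/db_vg and wal/wal_vg keys on each entry before "Compile lvm_volumes" concatenates the four lists.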
TASK [Print shared DB/WAL devices] ********************************************* 2025-05-19 14:31:48.648665 | orchestrator | Monday 19 May 2025 14:31:48 +0000 (0:00:00.310) 0:00:37.958 ************ 2025-05-19 14:31:48.780040 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:31:48.783936 | orchestrator | 2025-05-19 14:31:48.784938 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-19 14:31:48.787140 | orchestrator | Monday 19 May 2025 14:31:48 +0000 (0:00:00.137) 0:00:38.095 ************ 2025-05-19 14:31:49.005136 | orchestrator | changed: [testbed-node-5] => { 2025-05-19 14:31:49.005236 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-19 14:31:49.005496 | orchestrator |  "ceph_osd_devices": { 2025-05-19 14:31:49.008092 | orchestrator |  "sdb": { 2025-05-19 14:31:49.009085 | orchestrator |  "osd_lvm_uuid": "18cd8a80-96d5-5946-80eb-7d63885b2b76" 2025-05-19 14:31:49.009607 | orchestrator |  }, 2025-05-19 14:31:49.010278 | orchestrator |  "sdc": { 2025-05-19 14:31:49.010921 | orchestrator |  "osd_lvm_uuid": "ad566f4e-67fb-565a-8346-037c8100dc24" 2025-05-19 14:31:49.011341 | orchestrator |  } 2025-05-19 14:31:49.011713 | orchestrator |  }, 2025-05-19 14:31:49.012162 | orchestrator |  "lvm_volumes": [ 2025-05-19 14:31:49.012620 | orchestrator |  { 2025-05-19 14:31:49.013167 | orchestrator |  "data": "osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76", 2025-05-19 14:31:49.014084 | orchestrator |  "data_vg": "ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76" 2025-05-19 14:31:49.014530 | orchestrator |  }, 2025-05-19 14:31:49.014734 | orchestrator |  { 2025-05-19 14:31:49.015345 | orchestrator |  "data": "osd-block-ad566f4e-67fb-565a-8346-037c8100dc24", 2025-05-19 14:31:49.016614 | orchestrator |  "data_vg": "ceph-ad566f4e-67fb-565a-8346-037c8100dc24" 2025-05-19 14:31:49.016710 | orchestrator |  } 2025-05-19 14:31:49.016727 | orchestrator |  ] 2025-05-19 14:31:49.017356 | orchestrator |  } 2025-05-19 14:31:49.017684 | orchestrator | } 2025-05-19 14:31:49.018157 | orchestrator | 2025-05-19 14:31:49.018622 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-19 14:31:49.019100 | orchestrator | Monday 19 May 2025 14:31:49 +0000 (0:00:00.224) 0:00:38.319 ************ 2025-05-19 14:31:49.969536 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-19 14:31:49.970434 | orchestrator | 2025-05-19 14:31:49.973130 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:31:49.973192 | orchestrator | 2025-05-19 14:31:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:31:49.973600 | orchestrator | 2025-05-19 14:31:49 | INFO  | Please wait and do not abort execution. 
2025-05-19 14:31:49.975202 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-19 14:31:49.976166 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-19 14:31:49.977433 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-19 14:31:49.979201 | orchestrator | 2025-05-19 14:31:49.981466 | orchestrator | 2025-05-19 14:31:49.982928 | orchestrator | 2025-05-19 14:31:49.984939 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:31:49.986274 | orchestrator | Monday 19 May 2025 14:31:49 +0000 (0:00:00.963) 0:00:39.283 ************ 2025-05-19 14:31:49.987002 | orchestrator | =============================================================================== 2025-05-19 14:31:49.987732 | orchestrator | Write configuration file ------------------------------------------------ 3.79s 2025-05-19 14:31:49.988845 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2025-05-19 14:31:49.990125 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s 2025-05-19 14:31:49.991460 | orchestrator | Get initial list of available block devices ----------------------------- 0.97s 2025-05-19 14:31:49.992698 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2025-05-19 14:31:49.994295 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2025-05-19 14:31:49.995428 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.85s 2025-05-19 14:31:49.997331 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.76s 2025-05-19 14:31:49.998124 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2025-05-19 14:31:49.998733 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2025-05-19 14:31:49.999351 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.65s 2025-05-19 14:31:50.000090 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2025-05-19 14:31:50.001021 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-05-19 14:31:50.001740 | orchestrator | Print configuration data ------------------------------------------------ 0.63s 2025-05-19 14:31:50.005082 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2025-05-19 14:31:50.005118 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s 2025-05-19 14:31:50.006450 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s 2025-05-19 14:31:50.008142 | orchestrator | Print DB devices -------------------------------------------------------- 0.56s 2025-05-19 14:31:50.009039 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.55s 2025-05-19 14:31:50.010170 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2025-05-19 14:32:02.400940 | orchestrator | 2025-05-19 14:32:02 | INFO  | Task cb5aebe5-5031-4865-ba5d-8024f984623b (sync inventory) is running in background. 
Output coming soon. 2025-05-19 14:32:43.140346 | orchestrator | 2025-05-19 14:32:27 | INFO  | Starting group_vars file reorganization 2025-05-19 14:32:43.140456 | orchestrator | 2025-05-19 14:32:27 | INFO  | Moved 0 file(s) to their respective directories 2025-05-19 14:32:43.140470 | orchestrator | 2025-05-19 14:32:27 | INFO  | Group_vars file reorganization completed 2025-05-19 14:32:43.140481 | orchestrator | 2025-05-19 14:32:29 | INFO  | Starting variable preparation from inventory 2025-05-19 14:32:43.140491 | orchestrator | 2025-05-19 14:32:30 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-05-19 14:32:43.140501 | orchestrator | 2025-05-19 14:32:30 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-05-19 14:32:43.140511 | orchestrator | 2025-05-19 14:32:30 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-05-19 14:32:43.140521 | orchestrator | 2025-05-19 14:32:30 | INFO  | 3 file(s) written, 6 host(s) processed 2025-05-19 14:32:43.140569 | orchestrator | 2025-05-19 14:32:30 | INFO  | Variable preparation completed: 2025-05-19 14:32:43.140585 | orchestrator | 2025-05-19 14:32:31 | INFO  | Starting inventory overwrite handling 2025-05-19 14:32:43.140602 | orchestrator | 2025-05-19 14:32:31 | INFO  | Handling group overwrites in 99-overwrite 2025-05-19 14:32:43.140618 | orchestrator | 2025-05-19 14:32:31 | INFO  | Removing group frr:children from 60-generic 2025-05-19 14:32:43.140653 | orchestrator | 2025-05-19 14:32:31 | INFO  | Removing group storage:children from 50-kolla 2025-05-19 14:32:43.140663 | orchestrator | 2025-05-19 14:32:31 | INFO  | Removing group netbird:children from 50-infrastruture 2025-05-19 14:32:43.140673 | orchestrator | 2025-05-19 14:32:31 | INFO  | Removing group ceph-mds from 50-ceph 2025-05-19 14:32:43.140683 | orchestrator | 2025-05-19 14:32:31 | INFO  | Removing group ceph-rgw from 50-ceph 2025-05-19 14:32:43.140692 | orchestrator | 2025-05-19 14:32:31 | INFO  | Handling group overwrites in 20-roles 2025-05-19 14:32:43.140702 | orchestrator | 2025-05-19 14:32:31 | INFO  | Removing group k3s_node from 50-infrastruture 2025-05-19 14:32:43.140711 | orchestrator | 2025-05-19 14:32:31 | INFO  | Removed 6 group(s) in total 2025-05-19 14:32:43.140722 | orchestrator | 2025-05-19 14:32:31 | INFO  | Inventory overwrite handling completed 2025-05-19 14:32:43.140731 | orchestrator | 2025-05-19 14:32:32 | INFO  | Starting merge of inventory files 2025-05-19 14:32:43.140741 | orchestrator | 2025-05-19 14:32:32 | INFO  | Inventory files merged successfully 2025-05-19 14:32:43.140762 | orchestrator | 2025-05-19 14:32:35 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-05-19 14:32:43.140772 | orchestrator | 2025-05-19 14:32:42 | INFO  | Successfully wrote ClusterShell configuration 2025-05-19 14:32:45.101719 | orchestrator | 2025-05-19 14:32:45 | INFO  | Task a39850b2-e2d9-4902-9350-561b7d3d5534 (ceph-create-lvm-devices) was prepared for execution. 2025-05-19 14:32:45.101822 | orchestrator | 2025-05-19 14:32:45 | INFO  | It takes a moment until task a39850b2-e2d9-4902-9350-561b7d3d5534 (ceph-create-lvm-devices) has been started and output is visible here. 
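The variable preparation step regenerates a handful of small group_vars files from the live inventory; judging by the names logged above, each file pins exactly one variable (ceph_rgw_hosts, cephclient_mons, ceph_cluster_fsid) for later plays to consume. As an illustration only, 050-ceph-cluster-fsid.yml would plausibly look like this (the header comment and the FSID value are made up):

---
# generated from the Ansible inventory by the sync-inventory task; do not edit by hand
ceph_cluster_fsid: 00000000-0000-0000-0000-000000000000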
2025-05-19 14:32:49.088665 | orchestrator | 2025-05-19 14:32:49.090303 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-19 14:32:49.090342 | orchestrator | 2025-05-19 14:32:49.090356 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 14:32:49.091117 | orchestrator | Monday 19 May 2025 14:32:49 +0000 (0:00:00.250) 0:00:00.250 ************ 2025-05-19 14:32:49.298994 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 14:32:49.299100 | orchestrator | 2025-05-19 14:32:49.299839 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-19 14:32:49.300339 | orchestrator | Monday 19 May 2025 14:32:49 +0000 (0:00:00.212) 0:00:00.463 ************ 2025-05-19 14:32:49.515212 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:32:49.515325 | orchestrator | 2025-05-19 14:32:49.516221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:49.517804 | orchestrator | Monday 19 May 2025 14:32:49 +0000 (0:00:00.216) 0:00:00.680 ************ 2025-05-19 14:32:49.879427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-19 14:32:49.880271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-19 14:32:49.881618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-19 14:32:49.882631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-19 14:32:49.883738 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-19 14:32:49.884813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-19 14:32:49.885270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-19 14:32:49.885900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-19 14:32:49.886586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-19 14:32:49.887201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-19 14:32:49.887751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-19 14:32:49.888997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-19 14:32:49.889358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-19 14:32:49.890010 | orchestrator | 2025-05-19 14:32:49.891184 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:49.891917 | orchestrator | Monday 19 May 2025 14:32:49 +0000 (0:00:00.363) 0:00:01.044 ************ 2025-05-19 14:32:50.192029 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:50.193341 | orchestrator | 2025-05-19 14:32:50.193370 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:50.194089 | orchestrator | Monday 19 May 2025 14:32:50 +0000 (0:00:00.312) 0:00:01.356 ************ 2025-05-19 14:32:50.346066 | orchestrator | skipping: [testbed-node-3] 2025-05-19 
14:32:50.346452 | orchestrator | 2025-05-19 14:32:50.347666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:50.348295 | orchestrator | Monday 19 May 2025 14:32:50 +0000 (0:00:00.154) 0:00:01.510 ************ 2025-05-19 14:32:50.528945 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:50.529096 | orchestrator | 2025-05-19 14:32:50.529353 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:50.530105 | orchestrator | Monday 19 May 2025 14:32:50 +0000 (0:00:00.182) 0:00:01.693 ************ 2025-05-19 14:32:50.685727 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:50.685940 | orchestrator | 2025-05-19 14:32:50.686601 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:50.687421 | orchestrator | Monday 19 May 2025 14:32:50 +0000 (0:00:00.157) 0:00:01.850 ************ 2025-05-19 14:32:50.837438 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:50.838380 | orchestrator | 2025-05-19 14:32:50.839511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:50.840197 | orchestrator | Monday 19 May 2025 14:32:50 +0000 (0:00:00.152) 0:00:02.002 ************ 2025-05-19 14:32:51.016773 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:51.018497 | orchestrator | 2025-05-19 14:32:51.018917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:51.020078 | orchestrator | Monday 19 May 2025 14:32:51 +0000 (0:00:00.178) 0:00:02.181 ************ 2025-05-19 14:32:51.184089 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:51.184977 | orchestrator | 2025-05-19 14:32:51.186691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:51.186720 | orchestrator | Monday 19 May 2025 14:32:51 +0000 (0:00:00.167) 0:00:02.348 ************ 2025-05-19 14:32:51.358396 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:51.359370 | orchestrator | 2025-05-19 14:32:51.360866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:51.361237 | orchestrator | Monday 19 May 2025 14:32:51 +0000 (0:00:00.174) 0:00:02.522 ************ 2025-05-19 14:32:51.702968 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484) 2025-05-19 14:32:51.704300 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484) 2025-05-19 14:32:51.705408 | orchestrator | 2025-05-19 14:32:51.706455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:51.707124 | orchestrator | Monday 19 May 2025 14:32:51 +0000 (0:00:00.344) 0:00:02.867 ************ 2025-05-19 14:32:52.045874 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_680c5e0d-c4c7-4132-acba-9735c42c1af0) 2025-05-19 14:32:52.046218 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_680c5e0d-c4c7-4132-acba-9735c42c1af0) 2025-05-19 14:32:52.047034 | orchestrator | 2025-05-19 14:32:52.047606 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:52.048210 | orchestrator | Monday 19 May 2025 14:32:52 +0000 (0:00:00.338) 0:00:03.206 ************ 2025-05-19 
14:32:52.527156 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b41d3d7b-c0e8-42b0-b403-509e3ccc1be2) 2025-05-19 14:32:52.528837 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b41d3d7b-c0e8-42b0-b403-509e3ccc1be2) 2025-05-19 14:32:52.528902 | orchestrator | 2025-05-19 14:32:52.530735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:52.530764 | orchestrator | Monday 19 May 2025 14:32:52 +0000 (0:00:00.484) 0:00:03.691 ************ 2025-05-19 14:32:53.146961 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b9a454d9-5190-46d7-bf1d-412c3cdef809) 2025-05-19 14:32:53.147043 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b9a454d9-5190-46d7-bf1d-412c3cdef809) 2025-05-19 14:32:53.147789 | orchestrator | 2025-05-19 14:32:53.148455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:32:53.149285 | orchestrator | Monday 19 May 2025 14:32:53 +0000 (0:00:00.619) 0:00:04.311 ************ 2025-05-19 14:32:53.436772 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 14:32:53.437510 | orchestrator | 2025-05-19 14:32:53.438416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:53.439153 | orchestrator | Monday 19 May 2025 14:32:53 +0000 (0:00:00.289) 0:00:04.601 ************ 2025-05-19 14:32:53.821859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-19 14:32:53.821970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-19 14:32:53.822985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-19 14:32:53.824155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-19 14:32:53.828111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-19 14:32:53.831608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-19 14:32:53.831649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-19 14:32:53.831661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-19 14:32:53.831723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-19 14:32:53.832351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-19 14:32:53.833342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-19 14:32:53.834476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-19 14:32:53.834727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-19 14:32:53.835282 | orchestrator | 2025-05-19 14:32:53.835889 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:53.836580 | orchestrator | Monday 19 May 2025 14:32:53 +0000 (0:00:00.378) 0:00:04.979 ************ 2025-05-19 14:32:54.027741 | orchestrator | skipping: [testbed-node-3] 
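Every block device found on the node is run through /ansible/tasks/_add-device-links.yml, which resolves its stable /dev/disk/by-id aliases (the scsi-0QEMU_... and scsi-SQEMU_... items above) so OSD disks can be matched by persistent names rather than sdX letters. A rough sketch of the include loop and of the appending step inside it; the loop_var name and the _available_devices list fact are assumptions, the fact structure ansible_devices[...].links.ids is standard Ansible:

- name: Add known links to the list of available block devices
  ansible.builtin.include_tasks: _add-device-links.yml
  loop: "{{ ansible_devices.keys() | list }}"
  loop_control:
    loop_var: device

# _add-device-links.yml (sketch):
- name: Append stable /dev/disk/by-id names for {{ device }}
  ansible.builtin.set_fact:
    # links.ids holds the by-id symlink names gathered by Ansible facts
    _available_devices: "{{ _available_devices | default([]) + ansible_devices[device].links.ids }}"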
2025-05-19 14:32:54.028046 | orchestrator | 2025-05-19 14:32:54.029042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:54.030231 | orchestrator | Monday 19 May 2025 14:32:54 +0000 (0:00:00.210) 0:00:05.190 ************ 2025-05-19 14:32:54.235241 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:54.237060 | orchestrator | 2025-05-19 14:32:54.239426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:54.240803 | orchestrator | Monday 19 May 2025 14:32:54 +0000 (0:00:00.205) 0:00:05.395 ************ 2025-05-19 14:32:54.429240 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:54.430974 | orchestrator | 2025-05-19 14:32:54.432233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:54.433314 | orchestrator | Monday 19 May 2025 14:32:54 +0000 (0:00:00.197) 0:00:05.593 ************ 2025-05-19 14:32:54.628311 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:54.630319 | orchestrator | 2025-05-19 14:32:54.631035 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:54.631757 | orchestrator | Monday 19 May 2025 14:32:54 +0000 (0:00:00.199) 0:00:05.793 ************ 2025-05-19 14:32:54.829169 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:54.829280 | orchestrator | 2025-05-19 14:32:54.829296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:54.829310 | orchestrator | Monday 19 May 2025 14:32:54 +0000 (0:00:00.197) 0:00:05.991 ************ 2025-05-19 14:32:55.033179 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:55.035836 | orchestrator | 2025-05-19 14:32:55.035904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:55.035953 | orchestrator | Monday 19 May 2025 14:32:55 +0000 (0:00:00.204) 0:00:06.195 ************ 2025-05-19 14:32:55.221777 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:55.221877 | orchestrator | 2025-05-19 14:32:55.221983 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:55.222136 | orchestrator | Monday 19 May 2025 14:32:55 +0000 (0:00:00.190) 0:00:06.386 ************ 2025-05-19 14:32:55.401670 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:55.401915 | orchestrator | 2025-05-19 14:32:55.402161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:55.402480 | orchestrator | Monday 19 May 2025 14:32:55 +0000 (0:00:00.179) 0:00:06.565 ************ 2025-05-19 14:32:56.408767 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-19 14:32:56.410221 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-19 14:32:56.412346 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-19 14:32:56.412810 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-19 14:32:56.413875 | orchestrator | 2025-05-19 14:32:56.414433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:56.415122 | orchestrator | Monday 19 May 2025 14:32:56 +0000 (0:00:01.005) 0:00:07.571 ************ 2025-05-19 14:32:56.611082 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:56.611575 | orchestrator | 2025-05-19 14:32:56.611880 | orchestrator | 
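Only sda reports partitions here (sda1, sda14, sda15, sda16, i.e. the root disk), so the iterations for every other device are skipped. A matching sketch for the partition pass, under the same assumptions as the links sketch above:

# _add-device-partitions.yml (sketch; list and loop_var names assumed as before)
- name: Append partitions of {{ device }} to the list of available block devices
  ansible.builtin.set_fact:
    _available_devices: >-
      {{ _available_devices | default([])
         + (ansible_devices[device].partitions | dict2items | map(attribute='key') | list) }}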
TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:56.612603 | orchestrator | Monday 19 May 2025 14:32:56 +0000 (0:00:00.204) 0:00:07.775 ************ 2025-05-19 14:32:56.797243 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:56.798111 | orchestrator | 2025-05-19 14:32:56.799113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:56.800013 | orchestrator | Monday 19 May 2025 14:32:56 +0000 (0:00:00.186) 0:00:07.962 ************ 2025-05-19 14:32:56.989009 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:56.989362 | orchestrator | 2025-05-19 14:32:56.990107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:32:56.990762 | orchestrator | Monday 19 May 2025 14:32:56 +0000 (0:00:00.190) 0:00:08.152 ************ 2025-05-19 14:32:57.174355 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:57.174761 | orchestrator | 2025-05-19 14:32:57.175665 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-19 14:32:57.176258 | orchestrator | Monday 19 May 2025 14:32:57 +0000 (0:00:00.185) 0:00:08.338 ************ 2025-05-19 14:32:57.306791 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:57.306957 | orchestrator | 2025-05-19 14:32:57.307659 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-19 14:32:57.308281 | orchestrator | Monday 19 May 2025 14:32:57 +0000 (0:00:00.131) 0:00:08.470 ************ 2025-05-19 14:32:57.481203 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f79a0596-c901-5dda-8c3d-7673c0794e9f'}}) 2025-05-19 14:32:57.482187 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'be132d09-93e5-58e2-99ec-48d3b83dc2dd'}}) 2025-05-19 14:32:57.482751 | orchestrator | 2025-05-19 14:32:57.483331 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-19 14:32:57.483914 | orchestrator | Monday 19 May 2025 14:32:57 +0000 (0:00:00.175) 0:00:08.645 ************ 2025-05-19 14:32:59.496185 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'}) 2025-05-19 14:32:59.496299 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'}) 2025-05-19 14:32:59.497207 | orchestrator | 2025-05-19 14:32:59.498501 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-19 14:32:59.499666 | orchestrator | Monday 19 May 2025 14:32:59 +0000 (0:00:02.013) 0:00:10.658 ************ 2025-05-19 14:32:59.640855 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:32:59.640963 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:32:59.641730 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:32:59.642419 | orchestrator | 2025-05-19 14:32:59.644210 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-19 
14:32:59.645118 | orchestrator | Monday 19 May 2025 14:32:59 +0000 (0:00:00.145) 0:00:10.804 ************ 2025-05-19 14:33:01.022985 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'}) 2025-05-19 14:33:01.023488 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'}) 2025-05-19 14:33:01.024177 | orchestrator | 2025-05-19 14:33:01.026114 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-19 14:33:01.026903 | orchestrator | Monday 19 May 2025 14:33:01 +0000 (0:00:01.381) 0:00:12.185 ************ 2025-05-19 14:33:01.171391 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:01.173357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:01.175106 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:01.176586 | orchestrator | 2025-05-19 14:33:01.178089 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-19 14:33:01.179379 | orchestrator | Monday 19 May 2025 14:33:01 +0000 (0:00:00.150) 0:00:12.335 ************ 2025-05-19 14:33:01.305756 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:01.305984 | orchestrator | 2025-05-19 14:33:01.309788 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-19 14:33:01.309818 | orchestrator | Monday 19 May 2025 14:33:01 +0000 (0:00:00.134) 0:00:12.469 ************ 2025-05-19 14:33:01.659425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:01.660284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:01.660825 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:01.661709 | orchestrator | 2025-05-19 14:33:01.666683 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-19 14:33:01.666938 | orchestrator | Monday 19 May 2025 14:33:01 +0000 (0:00:00.352) 0:00:12.822 ************ 2025-05-19 14:33:01.789866 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:01.790135 | orchestrator | 2025-05-19 14:33:01.790770 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-19 14:33:01.791200 | orchestrator | Monday 19 May 2025 14:33:01 +0000 (0:00:00.132) 0:00:12.955 ************ 2025-05-19 14:33:01.938845 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:01.939820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:01.940882 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:01.941615 | orchestrator | 2025-05-19 14:33:01.942467 | orchestrator | 
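The two changed tasks above are where storage is actually laid out: one volume group per OSD disk, then one logical volume spanning all of it. A condensed sketch using the community.general LVM modules; the _block_vgs_to_pvs mapping is an assumed name for the dict built by the earlier "Create dict of block VGs -> PVs from ceph_osd_devices" task:

- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"
    pvs: "{{ _block_vgs_to_pvs[item.data_vg] }}"  # /dev/sdb and /dev/sdc on this node
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: 100%FREE   # the block LV takes the whole disk
    shrink: false
  loop: "{{ lvm_volumes }}"

Because no ceph_db_devices or ceph_wal_devices are configured on this node, all of the following DB/WAL VG and LV tasks are skipped.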
TASK [Create DB+WAL VGs] ******************************************************* 2025-05-19 14:33:01.943612 | orchestrator | Monday 19 May 2025 14:33:01 +0000 (0:00:00.148) 0:00:13.103 ************ 2025-05-19 14:33:02.075778 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:02.076328 | orchestrator | 2025-05-19 14:33:02.076990 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-19 14:33:02.078359 | orchestrator | Monday 19 May 2025 14:33:02 +0000 (0:00:00.135) 0:00:13.239 ************ 2025-05-19 14:33:02.220879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:02.223412 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:02.223444 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:02.223987 | orchestrator | 2025-05-19 14:33:02.224668 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-19 14:33:02.225410 | orchestrator | Monday 19 May 2025 14:33:02 +0000 (0:00:00.145) 0:00:13.384 ************ 2025-05-19 14:33:02.350490 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:33:02.350637 | orchestrator | 2025-05-19 14:33:02.350657 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-19 14:33:02.351500 | orchestrator | Monday 19 May 2025 14:33:02 +0000 (0:00:00.127) 0:00:13.511 ************ 2025-05-19 14:33:02.502954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:02.503809 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:02.505370 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:02.506656 | orchestrator | 2025-05-19 14:33:02.506957 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-19 14:33:02.507802 | orchestrator | Monday 19 May 2025 14:33:02 +0000 (0:00:00.154) 0:00:13.666 ************ 2025-05-19 14:33:02.640858 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:02.641224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:02.643121 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:02.643832 | orchestrator | 2025-05-19 14:33:02.644628 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-19 14:33:02.645419 | orchestrator | Monday 19 May 2025 14:33:02 +0000 (0:00:00.136) 0:00:13.803 ************ 2025-05-19 14:33:02.780693 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:02.780955 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  
2025-05-19 14:33:02.781598 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:02.782747 | orchestrator | 2025-05-19 14:33:02.783415 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-19 14:33:02.783948 | orchestrator | Monday 19 May 2025 14:33:02 +0000 (0:00:00.139) 0:00:13.943 ************ 2025-05-19 14:33:02.901087 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:02.901494 | orchestrator | 2025-05-19 14:33:02.902563 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-19 14:33:02.903236 | orchestrator | Monday 19 May 2025 14:33:02 +0000 (0:00:00.121) 0:00:14.064 ************ 2025-05-19 14:33:03.030815 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:03.033065 | orchestrator | 2025-05-19 14:33:03.033097 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-19 14:33:03.033112 | orchestrator | Monday 19 May 2025 14:33:03 +0000 (0:00:00.130) 0:00:14.195 ************ 2025-05-19 14:33:03.154621 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:03.155284 | orchestrator | 2025-05-19 14:33:03.156609 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-19 14:33:03.157143 | orchestrator | Monday 19 May 2025 14:33:03 +0000 (0:00:00.124) 0:00:14.319 ************ 2025-05-19 14:33:03.452003 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 14:33:03.452183 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-19 14:33:03.452735 | orchestrator | } 2025-05-19 14:33:03.453181 | orchestrator | 2025-05-19 14:33:03.454366 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-19 14:33:03.454429 | orchestrator | Monday 19 May 2025 14:33:03 +0000 (0:00:00.296) 0:00:14.615 ************ 2025-05-19 14:33:03.590745 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 14:33:03.591820 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-19 14:33:03.593757 | orchestrator | } 2025-05-19 14:33:03.594333 | orchestrator | 2025-05-19 14:33:03.595259 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-19 14:33:03.595584 | orchestrator | Monday 19 May 2025 14:33:03 +0000 (0:00:00.138) 0:00:14.754 ************ 2025-05-19 14:33:03.762778 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 14:33:03.763237 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-19 14:33:03.763692 | orchestrator | } 2025-05-19 14:33:03.764646 | orchestrator | 2025-05-19 14:33:03.765246 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-19 14:33:03.766973 | orchestrator | Monday 19 May 2025 14:33:03 +0000 (0:00:00.172) 0:00:14.927 ************ 2025-05-19 14:33:04.395927 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:33:04.396027 | orchestrator | 2025-05-19 14:33:04.396042 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-19 14:33:04.396549 | orchestrator | Monday 19 May 2025 14:33:04 +0000 (0:00:00.630) 0:00:15.557 ************ 2025-05-19 14:33:04.886272 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:33:04.886768 | orchestrator | 2025-05-19 14:33:04.887426 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-19 14:33:04.889599 | orchestrator | Monday 19 May 2025 14:33:04 +0000 (0:00:00.491) 
0:00:16.049 ************ 2025-05-19 14:33:05.371015 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:33:05.372153 | orchestrator | 2025-05-19 14:33:05.373366 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-19 14:33:05.374299 | orchestrator | Monday 19 May 2025 14:33:05 +0000 (0:00:00.485) 0:00:16.535 ************ 2025-05-19 14:33:05.504426 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:33:05.505354 | orchestrator | 2025-05-19 14:33:05.506902 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-19 14:33:05.507885 | orchestrator | Monday 19 May 2025 14:33:05 +0000 (0:00:00.133) 0:00:16.668 ************ 2025-05-19 14:33:05.608362 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:05.609085 | orchestrator | 2025-05-19 14:33:05.610005 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-19 14:33:05.611941 | orchestrator | Monday 19 May 2025 14:33:05 +0000 (0:00:00.103) 0:00:16.772 ************ 2025-05-19 14:33:05.712381 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:05.712959 | orchestrator | 2025-05-19 14:33:05.714720 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-19 14:33:05.715647 | orchestrator | Monday 19 May 2025 14:33:05 +0000 (0:00:00.102) 0:00:16.874 ************ 2025-05-19 14:33:05.851074 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 14:33:05.851430 | orchestrator |  "vgs_report": { 2025-05-19 14:33:05.854616 | orchestrator |  "vg": [] 2025-05-19 14:33:05.855646 | orchestrator |  } 2025-05-19 14:33:05.856124 | orchestrator | } 2025-05-19 14:33:05.856923 | orchestrator | 2025-05-19 14:33:05.857727 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-19 14:33:05.858354 | orchestrator | Monday 19 May 2025 14:33:05 +0000 (0:00:00.140) 0:00:17.014 ************ 2025-05-19 14:33:05.978632 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:05.979629 | orchestrator | 2025-05-19 14:33:05.980718 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-19 14:33:05.981714 | orchestrator | Monday 19 May 2025 14:33:05 +0000 (0:00:00.128) 0:00:17.143 ************ 2025-05-19 14:33:06.115919 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:06.116264 | orchestrator | 2025-05-19 14:33:06.116656 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-19 14:33:06.117233 | orchestrator | Monday 19 May 2025 14:33:06 +0000 (0:00:00.137) 0:00:17.280 ************ 2025-05-19 14:33:06.416957 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:06.417139 | orchestrator | 2025-05-19 14:33:06.417662 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-19 14:33:06.418779 | orchestrator | Monday 19 May 2025 14:33:06 +0000 (0:00:00.300) 0:00:17.581 ************ 2025-05-19 14:33:06.555022 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:06.555700 | orchestrator | 2025-05-19 14:33:06.556812 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-19 14:33:06.557370 | orchestrator | Monday 19 May 2025 14:33:06 +0000 (0:00:00.137) 0:00:17.718 ************ 2025-05-19 14:33:06.686650 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:06.687405 | orchestrator | 
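The three "Gather ... VGs" tasks feed JSON from the LVM tooling into Ansible facts; with no DB or WAL VGs defined, the combined report is empty ("vg": []) and the size calculations and guards that follow are skipped. A sketch of one leg of the gathering step, assuming the role shells out to vgs and parses its JSON report (the register name follows the "Combine JSON from _db/wal/db_wal_vgs_cmd_output" task above; the real task merges all three command outputs, only the DB leg is shown):

- name: Gather DB VGs with total and available size in bytes
  ansible.builtin.command: vgs --units b --nosuffix --reportformat json -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false

- name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
  ansible.builtin.set_fact:
    # vgs emits {"report": [{"vg": [...]}]}; keep the first report entry
    vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report.0 }}"

The subsequent "Fail if size ... > available" guards compare the requested DB/WAL LV sizes against vg_free from this report before any LV is created.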
2025-05-19 14:33:06.687924 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-19 14:33:06.688824 | orchestrator | Monday 19 May 2025 14:33:06 +0000 (0:00:00.130) 0:00:17.849 ************ 2025-05-19 14:33:06.814004 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:06.814393 | orchestrator | 2025-05-19 14:33:06.815761 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-19 14:33:06.816407 | orchestrator | Monday 19 May 2025 14:33:06 +0000 (0:00:00.128) 0:00:17.978 ************ 2025-05-19 14:33:06.945853 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:06.946108 | orchestrator | 2025-05-19 14:33:06.946932 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-19 14:33:06.948061 | orchestrator | Monday 19 May 2025 14:33:06 +0000 (0:00:00.131) 0:00:18.109 ************ 2025-05-19 14:33:07.081448 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:07.081764 | orchestrator | 2025-05-19 14:33:07.082704 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-19 14:33:07.083680 | orchestrator | Monday 19 May 2025 14:33:07 +0000 (0:00:00.135) 0:00:18.245 ************ 2025-05-19 14:33:07.216868 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:07.217455 | orchestrator | 2025-05-19 14:33:07.218890 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-19 14:33:07.219583 | orchestrator | Monday 19 May 2025 14:33:07 +0000 (0:00:00.135) 0:00:18.381 ************ 2025-05-19 14:33:07.348324 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:07.349276 | orchestrator | 2025-05-19 14:33:07.349754 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-19 14:33:07.351393 | orchestrator | Monday 19 May 2025 14:33:07 +0000 (0:00:00.131) 0:00:18.512 ************ 2025-05-19 14:33:07.470318 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:07.471070 | orchestrator | 2025-05-19 14:33:07.471651 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-19 14:33:07.472827 | orchestrator | Monday 19 May 2025 14:33:07 +0000 (0:00:00.121) 0:00:18.634 ************ 2025-05-19 14:33:07.611297 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:07.611486 | orchestrator | 2025-05-19 14:33:07.612142 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-19 14:33:07.612807 | orchestrator | Monday 19 May 2025 14:33:07 +0000 (0:00:00.140) 0:00:18.775 ************ 2025-05-19 14:33:07.740488 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:07.740739 | orchestrator | 2025-05-19 14:33:07.741420 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-19 14:33:07.742160 | orchestrator | Monday 19 May 2025 14:33:07 +0000 (0:00:00.128) 0:00:18.903 ************ 2025-05-19 14:33:07.876425 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:07.877049 | orchestrator | 2025-05-19 14:33:07.877798 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-19 14:33:07.878910 | orchestrator | Monday 19 May 2025 14:33:07 +0000 (0:00:00.137) 0:00:19.040 ************ 2025-05-19 14:33:08.216968 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:08.217152 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:08.218887 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:08.220452 | orchestrator | 2025-05-19 14:33:08.224040 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-19 14:33:08.224075 | orchestrator | Monday 19 May 2025 14:33:08 +0000 (0:00:00.339) 0:00:19.379 ************ 2025-05-19 14:33:08.356561 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:08.356725 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:08.358326 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:08.358538 | orchestrator | 2025-05-19 14:33:08.361679 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-19 14:33:08.362278 | orchestrator | Monday 19 May 2025 14:33:08 +0000 (0:00:00.140) 0:00:19.520 ************ 2025-05-19 14:33:08.498165 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:08.498889 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:08.500376 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:08.501447 | orchestrator | 2025-05-19 14:33:08.505965 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-19 14:33:08.505994 | orchestrator | Monday 19 May 2025 14:33:08 +0000 (0:00:00.142) 0:00:19.663 ************ 2025-05-19 14:33:08.646859 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:08.647060 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:08.650604 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:08.651168 | orchestrator | 2025-05-19 14:33:08.652787 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-19 14:33:08.652837 | orchestrator | Monday 19 May 2025 14:33:08 +0000 (0:00:00.147) 0:00:19.811 ************ 2025-05-19 14:33:08.789146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:08.790446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:08.791477 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:08.792617 | orchestrator | 2025-05-19 14:33:08.793667 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-05-19 14:33:08.794173 | orchestrator | Monday 19 May 2025 14:33:08 +0000 (0:00:00.141) 0:00:19.952 ************ 2025-05-19 14:33:08.929957 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:08.930167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:08.931348 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:08.933358 | orchestrator | 2025-05-19 14:33:08.933384 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-19 14:33:08.934081 | orchestrator | Monday 19 May 2025 14:33:08 +0000 (0:00:00.141) 0:00:20.093 ************ 2025-05-19 14:33:09.072133 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:09.073005 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:09.073233 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:09.074323 | orchestrator | 2025-05-19 14:33:09.075335 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-19 14:33:09.076234 | orchestrator | Monday 19 May 2025 14:33:09 +0000 (0:00:00.142) 0:00:20.236 ************ 2025-05-19 14:33:09.214224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})  2025-05-19 14:33:09.214741 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})  2025-05-19 14:33:09.215436 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:33:09.216141 | orchestrator | 2025-05-19 14:33:09.216739 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-19 14:33:09.218889 | orchestrator | Monday 19 May 2025 14:33:09 +0000 (0:00:00.142) 0:00:20.378 ************ 2025-05-19 14:33:09.710462 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:33:09.710605 | orchestrator | 2025-05-19 14:33:09.710694 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-19 14:33:09.711703 | orchestrator | Monday 19 May 2025 14:33:09 +0000 (0:00:00.494) 0:00:20.873 ************ 2025-05-19 14:33:10.192659 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:33:10.195112 | orchestrator | 2025-05-19 14:33:10.195148 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-19 14:33:10.195162 | orchestrator | Monday 19 May 2025 14:33:10 +0000 (0:00:00.482) 0:00:21.355 ************ 2025-05-19 14:33:10.335620 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:33:10.335811 | orchestrator | 2025-05-19 14:33:10.337572 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-19 14:33:10.337595 | orchestrator | Monday 19 May 2025 14:33:10 +0000 (0:00:00.142) 0:00:21.498 ************ 2025-05-19 14:33:10.493312 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'vg_name': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})
2025-05-19 14:33:10.495079 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'vg_name': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})
2025-05-19 14:33:10.498418 | orchestrator |
2025-05-19 14:33:10.498451 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-19 14:33:10.499257 | orchestrator | Monday 19 May 2025 14:33:10 +0000 (0:00:00.159) 0:00:21.657 ************
2025-05-19 14:33:10.841880 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})
2025-05-19 14:33:10.842713 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})
2025-05-19 14:33:10.842766 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:33:10.842787 | orchestrator |
2025-05-19 14:33:10.843308 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-19 14:33:10.845114 | orchestrator | Monday 19 May 2025 14:33:10 +0000 (0:00:00.348) 0:00:22.006 ************
2025-05-19 14:33:10.983613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})
2025-05-19 14:33:10.983761 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})
2025-05-19 14:33:10.983776 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:33:10.983861 | orchestrator |
2025-05-19 14:33:10.984089 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-19 14:33:10.984200 | orchestrator | Monday 19 May 2025 14:33:10 +0000 (0:00:00.138) 0:00:22.145 ************
2025-05-19 14:33:11.127924 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})
2025-05-19 14:33:11.128054 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})
2025-05-19 14:33:11.135059 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:33:11.135115 | orchestrator |
2025-05-19 14:33:11.135129 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-19 14:33:11.135142 | orchestrator | Monday 19 May 2025 14:33:11 +0000 (0:00:00.147) 0:00:22.292 ************
2025-05-19 14:33:11.406208 | orchestrator | ok: [testbed-node-3] => {
2025-05-19 14:33:11.406645 | orchestrator |     "lvm_report": {
2025-05-19 14:33:11.407331 | orchestrator |         "lv": [
2025-05-19 14:33:11.408945 | orchestrator |             {
2025-05-19 14:33:11.408986 | orchestrator |                 "lv_name": "osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd",
2025-05-19 14:33:11.409618 | orchestrator |                 "vg_name": "ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd"
2025-05-19 14:33:11.411546 | orchestrator |             },
2025-05-19 14:33:11.411578 | orchestrator |             {
2025-05-19 14:33:11.411774 | orchestrator |                 "lv_name": "osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f",
2025-05-19 14:33:11.417788 | orchestrator |                 "vg_name": "ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f"
2025-05-19 14:33:11.417815 | orchestrator |             }
2025-05-19 14:33:11.417826 | orchestrator |         ],
2025-05-19 14:33:11.418376 | orchestrator |         "pv": [
2025-05-19 14:33:11.419502 | orchestrator |             {
2025-05-19 14:33:11.420045 | orchestrator |                 "pv_name": "/dev/sdb",
2025-05-19 14:33:11.421074 | orchestrator |                 "vg_name": "ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f"
2025-05-19 14:33:11.421416 | orchestrator |             },
2025-05-19 14:33:11.422134 | orchestrator |             {
2025-05-19 14:33:11.422701 | orchestrator |                 "pv_name": "/dev/sdc",
2025-05-19 14:33:11.423863 | orchestrator |                 "vg_name": "ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd"
2025-05-19 14:33:11.424230 | orchestrator |             }
2025-05-19 14:33:11.425572 | orchestrator |         ]
2025-05-19 14:33:11.425623 | orchestrator |     }
2025-05-19 14:33:11.426172 | orchestrator | }
2025-05-19 14:33:11.426986 | orchestrator |
2025-05-19 14:33:11.427805 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-19 14:33:11.428408 | orchestrator |
2025-05-19 14:33:11.429165 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-19 14:33:11.429810 | orchestrator | Monday 19 May 2025 14:33:11 +0000 (0:00:00.278) 0:00:22.571 ************
2025-05-19 14:33:11.652798 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-19 14:33:11.653389 | orchestrator |
2025-05-19 14:33:11.654927 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-19 14:33:11.658930 | orchestrator | Monday 19 May 2025 14:33:11 +0000 (0:00:00.245) 0:00:22.816 ************
2025-05-19 14:33:11.878736 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:33:11.878899 | orchestrator |
2025-05-19 14:33:11.879161 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:33:11.880223 | orchestrator | Monday 19 May 2025 14:33:11 +0000 (0:00:00.223) 0:00:23.039 ************
2025-05-19 14:33:12.264624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-19 14:33:12.265747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-19 14:33:12.266392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-19 14:33:12.268023 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-19 14:33:12.269173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-19 14:33:12.270260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-19 14:33:12.271838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-19 14:33:12.272488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-05-19 14:33:12.273363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-05-19 14:33:12.274588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-05-19 14:33:12.275209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-05-19 14:33:12.276166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-05-19 14:33:12.276997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-19 14:33:12.277679 | orchestrator | 2025-05-19 14:33:12.278397 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:12.279101 | orchestrator | Monday 19 May 2025 14:33:12 +0000 (0:00:00.388) 0:00:23.428 ************ 2025-05-19 14:33:12.461095 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:12.462155 | orchestrator | 2025-05-19 14:33:12.462872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:12.466149 | orchestrator | Monday 19 May 2025 14:33:12 +0000 (0:00:00.195) 0:00:23.624 ************ 2025-05-19 14:33:12.657567 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:12.657796 | orchestrator | 2025-05-19 14:33:12.658335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:12.658921 | orchestrator | Monday 19 May 2025 14:33:12 +0000 (0:00:00.198) 0:00:23.822 ************ 2025-05-19 14:33:13.215853 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:13.216736 | orchestrator | 2025-05-19 14:33:13.217642 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:13.218754 | orchestrator | Monday 19 May 2025 14:33:13 +0000 (0:00:00.556) 0:00:24.379 ************ 2025-05-19 14:33:13.403772 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:13.405132 | orchestrator | 2025-05-19 14:33:13.407844 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:13.413189 | orchestrator | Monday 19 May 2025 14:33:13 +0000 (0:00:00.188) 0:00:24.567 ************ 2025-05-19 14:33:13.591134 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:13.591379 | orchestrator | 2025-05-19 14:33:13.591837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:13.592458 | orchestrator | Monday 19 May 2025 14:33:13 +0000 (0:00:00.188) 0:00:24.756 ************ 2025-05-19 14:33:13.779865 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:13.781788 | orchestrator | 2025-05-19 14:33:13.782086 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:13.783725 | orchestrator | Monday 19 May 2025 14:33:13 +0000 (0:00:00.187) 0:00:24.944 ************ 2025-05-19 14:33:13.975985 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:13.977873 | orchestrator | 2025-05-19 14:33:13.977908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:13.978645 | orchestrator | Monday 19 May 2025 14:33:13 +0000 (0:00:00.194) 0:00:25.139 ************ 2025-05-19 14:33:14.160740 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:14.161785 | orchestrator | 2025-05-19 14:33:14.162981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:14.163996 | orchestrator | Monday 19 May 2025 14:33:14 +0000 (0:00:00.185) 0:00:25.324 ************ 2025-05-19 14:33:14.605478 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e) 2025-05-19 14:33:14.605737 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e) 2025-05-19 
14:33:14.606569 | orchestrator | 2025-05-19 14:33:14.607564 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:14.608091 | orchestrator | Monday 19 May 2025 14:33:14 +0000 (0:00:00.438) 0:00:25.763 ************ 2025-05-19 14:33:15.009997 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538) 2025-05-19 14:33:15.010794 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538) 2025-05-19 14:33:15.011343 | orchestrator | 2025-05-19 14:33:15.012449 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:15.013171 | orchestrator | Monday 19 May 2025 14:33:15 +0000 (0:00:00.409) 0:00:26.172 ************ 2025-05-19 14:33:15.418387 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964) 2025-05-19 14:33:15.418748 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964) 2025-05-19 14:33:15.419631 | orchestrator | 2025-05-19 14:33:15.420337 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:15.420665 | orchestrator | Monday 19 May 2025 14:33:15 +0000 (0:00:00.409) 0:00:26.582 ************ 2025-05-19 14:33:15.844138 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a) 2025-05-19 14:33:15.844301 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a) 2025-05-19 14:33:15.844854 | orchestrator | 2025-05-19 14:33:15.845252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:15.845717 | orchestrator | Monday 19 May 2025 14:33:15 +0000 (0:00:00.425) 0:00:27.008 ************ 2025-05-19 14:33:16.162099 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 14:33:16.162888 | orchestrator | 2025-05-19 14:33:16.163332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:16.164206 | orchestrator | Monday 19 May 2025 14:33:16 +0000 (0:00:00.317) 0:00:27.326 ************ 2025-05-19 14:33:16.721392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-19 14:33:16.722250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-19 14:33:16.722951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-19 14:33:16.724258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-19 14:33:16.725360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-19 14:33:16.725432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-19 14:33:16.725889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-19 14:33:16.726461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-19 14:33:16.726941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-19 14:33:16.727438 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-19 14:33:16.728004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-19 14:33:16.728648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-19 14:33:16.729072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-19 14:33:16.729394 | orchestrator | 2025-05-19 14:33:16.729873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:16.731000 | orchestrator | Monday 19 May 2025 14:33:16 +0000 (0:00:00.558) 0:00:27.885 ************ 2025-05-19 14:33:16.917290 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:16.917550 | orchestrator | 2025-05-19 14:33:16.919680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:16.919707 | orchestrator | Monday 19 May 2025 14:33:16 +0000 (0:00:00.194) 0:00:28.079 ************ 2025-05-19 14:33:17.111564 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:17.111956 | orchestrator | 2025-05-19 14:33:17.112601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:17.113411 | orchestrator | Monday 19 May 2025 14:33:17 +0000 (0:00:00.195) 0:00:28.274 ************ 2025-05-19 14:33:17.304492 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:17.305323 | orchestrator | 2025-05-19 14:33:17.305851 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:17.306331 | orchestrator | Monday 19 May 2025 14:33:17 +0000 (0:00:00.194) 0:00:28.468 ************ 2025-05-19 14:33:17.498313 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:17.499147 | orchestrator | 2025-05-19 14:33:17.500056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:17.501769 | orchestrator | Monday 19 May 2025 14:33:17 +0000 (0:00:00.193) 0:00:28.662 ************ 2025-05-19 14:33:17.705399 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:17.706407 | orchestrator | 2025-05-19 14:33:17.706647 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:17.707853 | orchestrator | Monday 19 May 2025 14:33:17 +0000 (0:00:00.206) 0:00:28.869 ************ 2025-05-19 14:33:17.908948 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:17.909550 | orchestrator | 2025-05-19 14:33:17.910832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:17.911924 | orchestrator | Monday 19 May 2025 14:33:17 +0000 (0:00:00.203) 0:00:29.073 ************ 2025-05-19 14:33:18.101393 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:18.101649 | orchestrator | 2025-05-19 14:33:18.103320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:18.103543 | orchestrator | Monday 19 May 2025 14:33:18 +0000 (0:00:00.189) 0:00:29.262 ************ 2025-05-19 14:33:18.285071 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:18.285420 | orchestrator | 2025-05-19 14:33:18.286987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:18.288629 | orchestrator 
| Monday 19 May 2025 14:33:18 +0000 (0:00:00.186) 0:00:29.449 ************ 2025-05-19 14:33:19.078772 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-19 14:33:19.078885 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-19 14:33:19.078899 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-19 14:33:19.078971 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-19 14:33:19.079154 | orchestrator | 2025-05-19 14:33:19.079529 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:19.079897 | orchestrator | Monday 19 May 2025 14:33:19 +0000 (0:00:00.791) 0:00:30.241 ************ 2025-05-19 14:33:19.265108 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:19.265570 | orchestrator | 2025-05-19 14:33:19.266421 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:19.267178 | orchestrator | Monday 19 May 2025 14:33:19 +0000 (0:00:00.187) 0:00:30.428 ************ 2025-05-19 14:33:19.454613 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:19.455771 | orchestrator | 2025-05-19 14:33:19.455970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:19.456851 | orchestrator | Monday 19 May 2025 14:33:19 +0000 (0:00:00.189) 0:00:30.618 ************ 2025-05-19 14:33:20.024733 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:20.025833 | orchestrator | 2025-05-19 14:33:20.027125 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:20.028339 | orchestrator | Monday 19 May 2025 14:33:20 +0000 (0:00:00.568) 0:00:31.187 ************ 2025-05-19 14:33:20.217726 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:20.218452 | orchestrator | 2025-05-19 14:33:20.219495 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-19 14:33:20.220339 | orchestrator | Monday 19 May 2025 14:33:20 +0000 (0:00:00.194) 0:00:31.382 ************ 2025-05-19 14:33:20.379359 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:20.379757 | orchestrator | 2025-05-19 14:33:20.380565 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-19 14:33:20.381333 | orchestrator | Monday 19 May 2025 14:33:20 +0000 (0:00:00.161) 0:00:31.543 ************ 2025-05-19 14:33:20.566476 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '14b77220-8a02-5c14-b369-aaa75d02e7a5'}}) 2025-05-19 14:33:20.566854 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd28da045-49d6-58b1-95f0-26301c413660'}}) 2025-05-19 14:33:20.567636 | orchestrator | 2025-05-19 14:33:20.568397 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-19 14:33:20.569114 | orchestrator | Monday 19 May 2025 14:33:20 +0000 (0:00:00.186) 0:00:31.730 ************ 2025-05-19 14:33:22.318178 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'}) 2025-05-19 14:33:22.319199 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'}) 2025-05-19 14:33:22.320880 | orchestrator | 2025-05-19 14:33:22.323000 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-05-19 14:33:22.323924 | orchestrator | Monday 19 May 2025 14:33:22 +0000 (0:00:01.749) 0:00:33.480 ************ 2025-05-19 14:33:22.474737 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:22.475155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:22.475969 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:22.477188 | orchestrator | 2025-05-19 14:33:22.477865 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-19 14:33:22.478965 | orchestrator | Monday 19 May 2025 14:33:22 +0000 (0:00:00.155) 0:00:33.636 ************ 2025-05-19 14:33:23.749129 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'}) 2025-05-19 14:33:23.751542 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'}) 2025-05-19 14:33:23.751636 | orchestrator | 2025-05-19 14:33:23.752733 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-19 14:33:23.753457 | orchestrator | Monday 19 May 2025 14:33:23 +0000 (0:00:01.275) 0:00:34.911 ************ 2025-05-19 14:33:23.900757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:23.900870 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:23.900884 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:23.901195 | orchestrator | 2025-05-19 14:33:23.902066 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-19 14:33:23.905072 | orchestrator | Monday 19 May 2025 14:33:23 +0000 (0:00:00.148) 0:00:35.059 ************ 2025-05-19 14:33:24.038366 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:24.040813 | orchestrator | 2025-05-19 14:33:24.041929 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-19 14:33:24.043354 | orchestrator | Monday 19 May 2025 14:33:24 +0000 (0:00:00.142) 0:00:35.201 ************ 2025-05-19 14:33:24.195162 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:24.196074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:24.196554 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:24.197257 | orchestrator | 2025-05-19 14:33:24.203037 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-19 14:33:24.203120 | orchestrator | Monday 19 May 2025 14:33:24 +0000 (0:00:00.158) 0:00:35.360 ************ 2025-05-19 14:33:24.339445 | orchestrator | skipping: [testbed-node-4] 
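The 'Create block VGs' and 'Create block LVs' tasks above are the only steps so far in this play that report 'changed': they carve one volume group and one logical volume per OSD device, both named after the osd_lvm_uuid. The task files themselves are not part of this log; a minimal sketch of what those two steps amount to, assuming the community.general.lvg/lvol modules and an illustrative vg_to_pv lookup dict (both the dict and the loop variable naming here are assumptions, not the real OSISM task files):

  # Sketch only -- not the actual OSISM tasks, which this log does not contain.
  - name: Create block VGs
    community.general.lvg:
      vg: "{{ item.data_vg }}"             # e.g. ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5
      pvs: "{{ vg_to_pv[item.data_vg] }}"  # e.g. /dev/sdb (hypothetical mapping)
    loop: "{{ lvm_volumes }}"

  - name: Create block LVs
    community.general.lvol:
      vg: "{{ item.data_vg }}"
      lv: "{{ item.data }}"                # e.g. osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5
      size: 100%FREE                       # the whole VG becomes the OSD block LV
      shrink: false
    loop: "{{ lvm_volumes }}"

On the node this reduces to vgcreate/lvcreate calls, which is why both tasks report 'changed' on a fresh deployment and would report 'ok' on a re-run.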
2025-05-19 14:33:24.339593 | orchestrator | 2025-05-19 14:33:24.339744 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-19 14:33:24.339870 | orchestrator | Monday 19 May 2025 14:33:24 +0000 (0:00:00.143) 0:00:35.503 ************ 2025-05-19 14:33:24.477655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:24.478752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:24.479846 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:24.480461 | orchestrator | 2025-05-19 14:33:24.481470 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-19 14:33:24.482431 | orchestrator | Monday 19 May 2025 14:33:24 +0000 (0:00:00.136) 0:00:35.640 ************ 2025-05-19 14:33:24.800713 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:24.802186 | orchestrator | 2025-05-19 14:33:24.804206 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-19 14:33:24.804296 | orchestrator | Monday 19 May 2025 14:33:24 +0000 (0:00:00.324) 0:00:35.964 ************ 2025-05-19 14:33:24.978223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:24.978391 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:24.979218 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:24.979495 | orchestrator | 2025-05-19 14:33:24.979713 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-19 14:33:24.980011 | orchestrator | Monday 19 May 2025 14:33:24 +0000 (0:00:00.178) 0:00:36.143 ************ 2025-05-19 14:33:25.103191 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:33:25.105811 | orchestrator | 2025-05-19 14:33:25.106237 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-19 14:33:25.107199 | orchestrator | Monday 19 May 2025 14:33:25 +0000 (0:00:00.123) 0:00:36.267 ************ 2025-05-19 14:33:25.248100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:25.248840 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:25.250009 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:25.251773 | orchestrator | 2025-05-19 14:33:25.251807 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-19 14:33:25.252277 | orchestrator | Monday 19 May 2025 14:33:25 +0000 (0:00:00.145) 0:00:36.412 ************ 2025-05-19 14:33:25.400241 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:25.400758 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:25.402151 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:25.402996 | orchestrator | 2025-05-19 14:33:25.404329 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-19 14:33:25.404698 | orchestrator | Monday 19 May 2025 14:33:25 +0000 (0:00:00.152) 0:00:36.564 ************ 2025-05-19 14:33:25.546533 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:25.547054 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:25.549126 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:25.549180 | orchestrator | 2025-05-19 14:33:25.549438 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-19 14:33:25.550293 | orchestrator | Monday 19 May 2025 14:33:25 +0000 (0:00:00.145) 0:00:36.709 ************ 2025-05-19 14:33:25.671222 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:25.672162 | orchestrator | 2025-05-19 14:33:25.672484 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-19 14:33:25.673275 | orchestrator | Monday 19 May 2025 14:33:25 +0000 (0:00:00.125) 0:00:36.835 ************ 2025-05-19 14:33:25.802461 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:25.803074 | orchestrator | 2025-05-19 14:33:25.803680 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-19 14:33:25.804615 | orchestrator | Monday 19 May 2025 14:33:25 +0000 (0:00:00.132) 0:00:36.967 ************ 2025-05-19 14:33:25.931183 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:25.932178 | orchestrator | 2025-05-19 14:33:25.932637 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-19 14:33:25.933741 | orchestrator | Monday 19 May 2025 14:33:25 +0000 (0:00:00.127) 0:00:37.094 ************ 2025-05-19 14:33:26.066677 | orchestrator | ok: [testbed-node-4] => { 2025-05-19 14:33:26.067441 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-19 14:33:26.067748 | orchestrator | } 2025-05-19 14:33:26.070740 | orchestrator | 2025-05-19 14:33:26.071221 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-19 14:33:26.071846 | orchestrator | Monday 19 May 2025 14:33:26 +0000 (0:00:00.136) 0:00:37.231 ************ 2025-05-19 14:33:26.196007 | orchestrator | ok: [testbed-node-4] => { 2025-05-19 14:33:26.196796 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-19 14:33:26.197460 | orchestrator | } 2025-05-19 14:33:26.198154 | orchestrator | 2025-05-19 14:33:26.198715 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-19 14:33:26.199741 | orchestrator | Monday 19 May 2025 14:33:26 +0000 (0:00:00.128) 0:00:37.360 ************ 2025-05-19 14:33:26.337389 | orchestrator | ok: [testbed-node-4] => { 2025-05-19 14:33:26.338447 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-19 14:33:26.338858 | orchestrator | } 2025-05-19 14:33:26.339756 | orchestrator | 2025-05-19 14:33:26.340082 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ********************
2025-05-19 14:33:26.340701 | orchestrator | Monday 19 May 2025 14:33:26 +0000 (0:00:00.140) 0:00:37.501 ************
2025-05-19 14:33:27.024922 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:33:27.025532 | orchestrator |
2025-05-19 14:33:27.026449 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-19 14:33:27.027006 | orchestrator | Monday 19 May 2025 14:33:27 +0000 (0:00:00.686) 0:00:38.187 ************
2025-05-19 14:33:27.551553 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:33:27.552293 | orchestrator |
2025-05-19 14:33:27.553397 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-19 14:33:27.554259 | orchestrator | Monday 19 May 2025 14:33:27 +0000 (0:00:00.527) 0:00:38.714 ************
2025-05-19 14:33:28.053075 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:33:28.053241 | orchestrator |
2025-05-19 14:33:28.054339 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-19 14:33:28.055289 | orchestrator | Monday 19 May 2025 14:33:28 +0000 (0:00:00.146) 0:00:39.216 ************
2025-05-19 14:33:28.198543 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:33:28.199101 | orchestrator |
2025-05-19 14:33:28.200022 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-19 14:33:28.201647 | orchestrator | Monday 19 May 2025 14:33:28 +0000 (0:00:00.146) 0:00:39.362 ************
2025-05-19 14:33:28.306250 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:33:28.308207 | orchestrator |
2025-05-19 14:33:28.308947 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-19 14:33:28.309658 | orchestrator | Monday 19 May 2025 14:33:28 +0000 (0:00:00.107) 0:00:39.470 ************
2025-05-19 14:33:28.414931 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:33:28.415662 | orchestrator |
2025-05-19 14:33:28.416559 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-19 14:33:28.417180 | orchestrator | Monday 19 May 2025 14:33:28 +0000 (0:00:00.107) 0:00:39.578 ************
2025-05-19 14:33:28.550739 | orchestrator | ok: [testbed-node-4] => {
2025-05-19 14:33:28.551384 | orchestrator |     "vgs_report": {
2025-05-19 14:33:28.552858 | orchestrator |         "vg": []
2025-05-19 14:33:28.553289 | orchestrator |     }
2025-05-19 14:33:28.553973 | orchestrator | }
2025-05-19 14:33:28.555253 | orchestrator |
2025-05-19 14:33:28.555643 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-19 14:33:28.556339 | orchestrator | Monday 19 May 2025 14:33:28 +0000 (0:00:00.135) 0:00:39.714 ************
2025-05-19 14:33:28.686451 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:33:28.687308 | orchestrator |
2025-05-19 14:33:28.688219 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-19 14:33:28.689218 | orchestrator | Monday 19 May 2025 14:33:28 +0000 (0:00:00.136) 0:00:39.850 ************
2025-05-19 14:33:28.806414 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:33:28.806671 | orchestrator |
2025-05-19 14:33:28.807432 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-19 14:33:28.808403 | orchestrator | Monday 19 May 2025 14:33:28 +0000 (0:00:00.120)
0:00:39.970 ************ 2025-05-19 14:33:28.925955 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:28.926434 | orchestrator | 2025-05-19 14:33:28.927515 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-19 14:33:28.928111 | orchestrator | Monday 19 May 2025 14:33:28 +0000 (0:00:00.119) 0:00:40.090 ************ 2025-05-19 14:33:29.063232 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:29.064035 | orchestrator | 2025-05-19 14:33:29.065142 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-19 14:33:29.065877 | orchestrator | Monday 19 May 2025 14:33:29 +0000 (0:00:00.136) 0:00:40.227 ************ 2025-05-19 14:33:29.194738 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:29.194841 | orchestrator | 2025-05-19 14:33:29.195762 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-19 14:33:29.196208 | orchestrator | Monday 19 May 2025 14:33:29 +0000 (0:00:00.131) 0:00:40.358 ************ 2025-05-19 14:33:29.511796 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:29.512103 | orchestrator | 2025-05-19 14:33:29.512946 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-19 14:33:29.513161 | orchestrator | Monday 19 May 2025 14:33:29 +0000 (0:00:00.316) 0:00:40.675 ************ 2025-05-19 14:33:29.671052 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:29.671235 | orchestrator | 2025-05-19 14:33:29.671600 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-19 14:33:29.672528 | orchestrator | Monday 19 May 2025 14:33:29 +0000 (0:00:00.159) 0:00:40.834 ************ 2025-05-19 14:33:29.807949 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:29.808050 | orchestrator | 2025-05-19 14:33:29.808857 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-19 14:33:29.810615 | orchestrator | Monday 19 May 2025 14:33:29 +0000 (0:00:00.136) 0:00:40.971 ************ 2025-05-19 14:33:29.955408 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:29.955992 | orchestrator | 2025-05-19 14:33:29.956789 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-19 14:33:29.957759 | orchestrator | Monday 19 May 2025 14:33:29 +0000 (0:00:00.147) 0:00:41.118 ************ 2025-05-19 14:33:30.094168 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:30.094544 | orchestrator | 2025-05-19 14:33:30.095538 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-19 14:33:30.096164 | orchestrator | Monday 19 May 2025 14:33:30 +0000 (0:00:00.140) 0:00:41.258 ************ 2025-05-19 14:33:30.220721 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:30.220990 | orchestrator | 2025-05-19 14:33:30.221723 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-19 14:33:30.222405 | orchestrator | Monday 19 May 2025 14:33:30 +0000 (0:00:00.126) 0:00:41.384 ************ 2025-05-19 14:33:30.352944 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:30.353886 | orchestrator | 2025-05-19 14:33:30.354449 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-19 14:33:30.355556 | orchestrator | Monday 19 May 2025 14:33:30 
+0000 (0:00:00.131) 0:00:41.516 ************ 2025-05-19 14:33:30.480641 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:30.481162 | orchestrator | 2025-05-19 14:33:30.482600 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-19 14:33:30.483362 | orchestrator | Monday 19 May 2025 14:33:30 +0000 (0:00:00.128) 0:00:41.645 ************ 2025-05-19 14:33:30.618597 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:30.619251 | orchestrator | 2025-05-19 14:33:30.620713 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-19 14:33:30.621200 | orchestrator | Monday 19 May 2025 14:33:30 +0000 (0:00:00.136) 0:00:41.781 ************ 2025-05-19 14:33:30.777097 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:30.777838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:30.779061 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:30.780035 | orchestrator | 2025-05-19 14:33:30.780766 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-19 14:33:30.781614 | orchestrator | Monday 19 May 2025 14:33:30 +0000 (0:00:00.159) 0:00:41.940 ************ 2025-05-19 14:33:30.924561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:30.924683 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:30.925915 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:30.926775 | orchestrator | 2025-05-19 14:33:30.927404 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-19 14:33:30.928113 | orchestrator | Monday 19 May 2025 14:33:30 +0000 (0:00:00.145) 0:00:42.086 ************ 2025-05-19 14:33:31.068056 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:31.068372 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:31.069806 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:31.070704 | orchestrator | 2025-05-19 14:33:31.071665 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-19 14:33:31.072285 | orchestrator | Monday 19 May 2025 14:33:31 +0000 (0:00:00.145) 0:00:42.232 ************ 2025-05-19 14:33:31.388203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:31.389045 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:31.390155 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:31.391338 | orchestrator | 2025-05-19 14:33:31.392245 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-19 14:33:31.393230 | orchestrator | Monday 19 May 2025 14:33:31 +0000 (0:00:00.320) 0:00:42.552 ************ 2025-05-19 14:33:31.531398 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:31.532697 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:31.534689 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:31.535586 | orchestrator | 2025-05-19 14:33:31.536145 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-19 14:33:31.536825 | orchestrator | Monday 19 May 2025 14:33:31 +0000 (0:00:00.143) 0:00:42.695 ************ 2025-05-19 14:33:31.677855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:31.678608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:31.678636 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:31.679551 | orchestrator | 2025-05-19 14:33:31.680720 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-19 14:33:31.682182 | orchestrator | Monday 19 May 2025 14:33:31 +0000 (0:00:00.145) 0:00:42.841 ************ 2025-05-19 14:33:31.824154 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:31.824369 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:31.825600 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:31.826178 | orchestrator | 2025-05-19 14:33:31.826599 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-19 14:33:31.827323 | orchestrator | Monday 19 May 2025 14:33:31 +0000 (0:00:00.147) 0:00:42.988 ************ 2025-05-19 14:33:31.973559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:31.974185 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:31.974772 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:31.976490 | orchestrator | 2025-05-19 14:33:31.976552 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-19 14:33:31.977051 | orchestrator | Monday 19 May 2025 14:33:31 +0000 (0:00:00.148) 0:00:43.137 ************ 2025-05-19 14:33:32.463650 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:33:32.464109 | orchestrator | 2025-05-19 14:33:32.465084 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-19 14:33:32.466096 | orchestrator | Monday 19 May 2025 14:33:32 +0000 (0:00:00.489) 0:00:43.627 
************ 2025-05-19 14:33:32.966159 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:33:32.966345 | orchestrator | 2025-05-19 14:33:32.966945 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-19 14:33:32.968247 | orchestrator | Monday 19 May 2025 14:33:32 +0000 (0:00:00.501) 0:00:44.129 ************ 2025-05-19 14:33:33.097967 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:33:33.098700 | orchestrator | 2025-05-19 14:33:33.098732 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-19 14:33:33.099175 | orchestrator | Monday 19 May 2025 14:33:33 +0000 (0:00:00.133) 0:00:44.262 ************ 2025-05-19 14:33:33.255077 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'vg_name': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'}) 2025-05-19 14:33:33.255230 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'vg_name': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'}) 2025-05-19 14:33:33.255730 | orchestrator | 2025-05-19 14:33:33.256889 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-19 14:33:33.257953 | orchestrator | Monday 19 May 2025 14:33:33 +0000 (0:00:00.156) 0:00:44.418 ************ 2025-05-19 14:33:33.402341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:33.402713 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:33.403446 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:33.404230 | orchestrator | 2025-05-19 14:33:33.406411 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-19 14:33:33.406951 | orchestrator | Monday 19 May 2025 14:33:33 +0000 (0:00:00.147) 0:00:44.566 ************ 2025-05-19 14:33:33.548444 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:33.549252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:33.550003 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:33.550941 | orchestrator | 2025-05-19 14:33:33.551621 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-19 14:33:33.552873 | orchestrator | Monday 19 May 2025 14:33:33 +0000 (0:00:00.147) 0:00:44.713 ************ 2025-05-19 14:33:33.699603 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})  2025-05-19 14:33:33.700092 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})  2025-05-19 14:33:33.700980 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:33:33.702101 | orchestrator | 2025-05-19 14:33:33.703018 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-19 14:33:33.703779 | 
orchestrator | Monday 19 May 2025 14:33:33 +0000 (0:00:00.147) 0:00:44.861 ************
2025-05-19 14:33:34.157868 | orchestrator | ok: [testbed-node-4] => {
2025-05-19 14:33:34.158537 | orchestrator |     "lvm_report": {
2025-05-19 14:33:34.159887 | orchestrator |         "lv": [
2025-05-19 14:33:34.160833 | orchestrator |             {
2025-05-19 14:33:34.162372 | orchestrator |                 "lv_name": "osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5",
2025-05-19 14:33:34.162798 | orchestrator |                 "vg_name": "ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5"
2025-05-19 14:33:34.163786 | orchestrator |             },
2025-05-19 14:33:34.164958 | orchestrator |             {
2025-05-19 14:33:34.165596 | orchestrator |                 "lv_name": "osd-block-d28da045-49d6-58b1-95f0-26301c413660",
2025-05-19 14:33:34.166275 | orchestrator |                 "vg_name": "ceph-d28da045-49d6-58b1-95f0-26301c413660"
2025-05-19 14:33:34.166434 | orchestrator |             }
2025-05-19 14:33:34.167091 | orchestrator |         ],
2025-05-19 14:33:34.168893 | orchestrator |         "pv": [
2025-05-19 14:33:34.168920 | orchestrator |             {
2025-05-19 14:33:34.169087 | orchestrator |                 "pv_name": "/dev/sdb",
2025-05-19 14:33:34.169493 | orchestrator |                 "vg_name": "ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5"
2025-05-19 14:33:34.169890 | orchestrator |             },
2025-05-19 14:33:34.170738 | orchestrator |             {
2025-05-19 14:33:34.170762 | orchestrator |                 "pv_name": "/dev/sdc",
2025-05-19 14:33:34.170974 | orchestrator |                 "vg_name": "ceph-d28da045-49d6-58b1-95f0-26301c413660"
2025-05-19 14:33:34.172011 | orchestrator |             }
2025-05-19 14:33:34.172711 | orchestrator |         ]
2025-05-19 14:33:34.173341 | orchestrator |     }
2025-05-19 14:33:34.173922 | orchestrator | }
2025-05-19 14:33:34.174539 | orchestrator |
2025-05-19 14:33:34.175171 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-19 14:33:34.175916 | orchestrator |
2025-05-19 14:33:34.176306 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-19 14:33:34.176913 | orchestrator | Monday 19 May 2025 14:33:34 +0000 (0:00:00.460) 0:00:45.322 ************
2025-05-19 14:33:34.386638 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-19 14:33:34.386855 | orchestrator |
2025-05-19 14:33:34.387593 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-19 14:33:34.388471 | orchestrator | Monday 19 May 2025 14:33:34 +0000 (0:00:00.228) 0:00:45.550 ************
2025-05-19 14:33:34.618658 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:33:34.619212 | orchestrator |
2025-05-19 14:33:34.619699 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 14:33:34.620418 | orchestrator | Monday 19 May 2025 14:33:34 +0000 (0:00:00.231) 0:00:45.782 ************
2025-05-19 14:33:35.000809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-19 14:33:35.000920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-19 14:33:35.001845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-19 14:33:35.002548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-19 14:33:35.003418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-19 14:33:35.004149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for
testbed-node-5 => (item=loop5) 2025-05-19 14:33:35.004758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-19 14:33:35.005420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-19 14:33:35.005924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-19 14:33:35.006467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-19 14:33:35.007111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-19 14:33:35.007664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-19 14:33:35.008140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-19 14:33:35.008617 | orchestrator | 2025-05-19 14:33:35.009260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:35.009798 | orchestrator | Monday 19 May 2025 14:33:34 +0000 (0:00:00.382) 0:00:46.165 ************ 2025-05-19 14:33:35.178849 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:35.179096 | orchestrator | 2025-05-19 14:33:35.180077 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:35.181452 | orchestrator | Monday 19 May 2025 14:33:35 +0000 (0:00:00.177) 0:00:46.342 ************ 2025-05-19 14:33:35.360321 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:35.360544 | orchestrator | 2025-05-19 14:33:35.361057 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:35.362117 | orchestrator | Monday 19 May 2025 14:33:35 +0000 (0:00:00.182) 0:00:46.524 ************ 2025-05-19 14:33:35.550629 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:35.550985 | orchestrator | 2025-05-19 14:33:35.552023 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:35.552826 | orchestrator | Monday 19 May 2025 14:33:35 +0000 (0:00:00.189) 0:00:46.714 ************ 2025-05-19 14:33:35.751159 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:35.751405 | orchestrator | 2025-05-19 14:33:35.753375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:35.753786 | orchestrator | Monday 19 May 2025 14:33:35 +0000 (0:00:00.198) 0:00:46.913 ************ 2025-05-19 14:33:35.937629 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:35.937837 | orchestrator | 2025-05-19 14:33:35.938271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:35.938957 | orchestrator | Monday 19 May 2025 14:33:35 +0000 (0:00:00.187) 0:00:47.101 ************ 2025-05-19 14:33:36.468986 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:36.469490 | orchestrator | 2025-05-19 14:33:36.470563 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:36.471366 | orchestrator | Monday 19 May 2025 14:33:36 +0000 (0:00:00.531) 0:00:47.633 ************ 2025-05-19 14:33:36.667323 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:36.668182 | orchestrator | 2025-05-19 14:33:36.669592 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-05-19 14:33:36.671094 | orchestrator | Monday 19 May 2025 14:33:36 +0000 (0:00:00.197) 0:00:47.831 ************ 2025-05-19 14:33:36.857371 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:36.857647 | orchestrator | 2025-05-19 14:33:36.858384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:36.859439 | orchestrator | Monday 19 May 2025 14:33:36 +0000 (0:00:00.191) 0:00:48.022 ************ 2025-05-19 14:33:37.263729 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4) 2025-05-19 14:33:37.264705 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4) 2025-05-19 14:33:37.265147 | orchestrator | 2025-05-19 14:33:37.266725 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:37.267222 | orchestrator | Monday 19 May 2025 14:33:37 +0000 (0:00:00.404) 0:00:48.426 ************ 2025-05-19 14:33:37.662669 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834) 2025-05-19 14:33:37.662772 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834) 2025-05-19 14:33:37.663927 | orchestrator | 2025-05-19 14:33:37.664549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:37.665588 | orchestrator | Monday 19 May 2025 14:33:37 +0000 (0:00:00.400) 0:00:48.826 ************ 2025-05-19 14:33:38.083430 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738) 2025-05-19 14:33:38.083821 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738) 2025-05-19 14:33:38.084559 | orchestrator | 2025-05-19 14:33:38.085620 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:38.087685 | orchestrator | Monday 19 May 2025 14:33:38 +0000 (0:00:00.420) 0:00:49.247 ************ 2025-05-19 14:33:38.484969 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb) 2025-05-19 14:33:38.487344 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb) 2025-05-19 14:33:38.488172 | orchestrator | 2025-05-19 14:33:38.488993 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 14:33:38.489768 | orchestrator | Monday 19 May 2025 14:33:38 +0000 (0:00:00.400) 0:00:49.648 ************ 2025-05-19 14:33:38.809862 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 14:33:38.810313 | orchestrator | 2025-05-19 14:33:38.811263 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:38.814718 | orchestrator | Monday 19 May 2025 14:33:38 +0000 (0:00:00.323) 0:00:49.972 ************ 2025-05-19 14:33:39.214910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-19 14:33:39.215836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-19 14:33:39.217075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-19 14:33:39.218073 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-19 14:33:39.219823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-19 14:33:39.220734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-19 14:33:39.222154 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-19 14:33:39.223053 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-19 14:33:39.223914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-19 14:33:39.224628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-19 14:33:39.225753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-19 14:33:39.226724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-19 14:33:39.227192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-19 14:33:39.227963 | orchestrator | 2025-05-19 14:33:39.228793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:39.229363 | orchestrator | Monday 19 May 2025 14:33:39 +0000 (0:00:00.406) 0:00:50.378 ************ 2025-05-19 14:33:39.401383 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:39.402799 | orchestrator | 2025-05-19 14:33:39.403760 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:39.404862 | orchestrator | Monday 19 May 2025 14:33:39 +0000 (0:00:00.186) 0:00:50.565 ************ 2025-05-19 14:33:39.604290 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:39.606636 | orchestrator | 2025-05-19 14:33:39.606956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:39.607631 | orchestrator | Monday 19 May 2025 14:33:39 +0000 (0:00:00.203) 0:00:50.768 ************ 2025-05-19 14:33:40.178835 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:40.179029 | orchestrator | 2025-05-19 14:33:40.180029 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:40.181045 | orchestrator | Monday 19 May 2025 14:33:40 +0000 (0:00:00.573) 0:00:51.342 ************ 2025-05-19 14:33:40.375481 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:40.375746 | orchestrator | 2025-05-19 14:33:40.376613 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:40.377387 | orchestrator | Monday 19 May 2025 14:33:40 +0000 (0:00:00.197) 0:00:51.539 ************ 2025-05-19 14:33:40.570829 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:40.570922 | orchestrator | 2025-05-19 14:33:40.572402 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:40.572425 | orchestrator | Monday 19 May 2025 14:33:40 +0000 (0:00:00.195) 0:00:51.735 ************ 2025-05-19 14:33:40.767181 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:40.767913 | orchestrator | 2025-05-19 14:33:40.768602 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-05-19 14:33:40.769589 | orchestrator | Monday 19 May 2025 14:33:40 +0000 (0:00:00.195) 0:00:51.930 ************ 2025-05-19 14:33:40.958543 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:40.958735 | orchestrator | 2025-05-19 14:33:40.959878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:40.960379 | orchestrator | Monday 19 May 2025 14:33:40 +0000 (0:00:00.192) 0:00:52.123 ************ 2025-05-19 14:33:41.152043 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:41.152198 | orchestrator | 2025-05-19 14:33:41.153076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:41.154005 | orchestrator | Monday 19 May 2025 14:33:41 +0000 (0:00:00.193) 0:00:52.316 ************ 2025-05-19 14:33:41.789145 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-19 14:33:41.789955 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-19 14:33:41.790180 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-19 14:33:41.791579 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-19 14:33:41.792948 | orchestrator | 2025-05-19 14:33:41.793879 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:41.794413 | orchestrator | Monday 19 May 2025 14:33:41 +0000 (0:00:00.634) 0:00:52.951 ************ 2025-05-19 14:33:41.981011 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:41.981482 | orchestrator | 2025-05-19 14:33:41.982357 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:41.983537 | orchestrator | Monday 19 May 2025 14:33:41 +0000 (0:00:00.193) 0:00:53.145 ************ 2025-05-19 14:33:42.189843 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:42.190336 | orchestrator | 2025-05-19 14:33:42.190749 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:42.191631 | orchestrator | Monday 19 May 2025 14:33:42 +0000 (0:00:00.208) 0:00:53.354 ************ 2025-05-19 14:33:42.396125 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:42.397050 | orchestrator | 2025-05-19 14:33:42.397783 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 14:33:42.399816 | orchestrator | Monday 19 May 2025 14:33:42 +0000 (0:00:00.205) 0:00:53.559 ************ 2025-05-19 14:33:42.581755 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:42.582559 | orchestrator | 2025-05-19 14:33:42.583709 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-19 14:33:42.584803 | orchestrator | Monday 19 May 2025 14:33:42 +0000 (0:00:00.186) 0:00:53.746 ************ 2025-05-19 14:33:42.892075 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:42.892405 | orchestrator | 2025-05-19 14:33:42.893463 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-19 14:33:42.894870 | orchestrator | Monday 19 May 2025 14:33:42 +0000 (0:00:00.309) 0:00:54.055 ************ 2025-05-19 14:33:43.093437 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '18cd8a80-96d5-5946-80eb-7d63885b2b76'}}) 2025-05-19 14:33:43.094286 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ad566f4e-67fb-565a-8346-037c8100dc24'}}) 
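
The two ok items above record the mapping that drives the next steps: every entry in ceph_osd_devices carries a stable osd_lvm_uuid, from which the volume group name (ceph-<uuid>) and logical volume name (osd-block-<uuid>) seen throughout the rest of this log are derived. The "Create block VGs" and "Create block LVs" tasks that follow then create one VG on each OSD device and a single LV spanning it. A minimal sketch of equivalent tasks, assuming the community.general collection and reusing the two devices from this log; this is an illustration, not the original OSISM task file:

    ---
    # Sketch: one LVM VG per OSD device, one osd-block LV per VG.
    # Device names and UUIDs are copied from the log above; the task
    # structure itself is an assumption for illustration.
    - hosts: testbed-node-5
      become: true
      vars:
        ceph_osd_devices:
          sdb:
            osd_lvm_uuid: 18cd8a80-96d5-5946-80eb-7d63885b2b76
          sdc:
            osd_lvm_uuid: ad566f4e-67fb-565a-8346-037c8100dc24
      tasks:
        - name: Create block VGs
          community.general.lvg:
            vg: "ceph-{{ item.value.osd_lvm_uuid }}"
            pvs: "/dev/{{ item.key }}"
          loop: "{{ ceph_osd_devices | dict2items }}"

        - name: Create block LVs
          community.general.lvol:
            vg: "ceph-{{ item.value.osd_lvm_uuid }}"
            lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
            size: 100%FREE
            shrink: false  # keep idempotent re-runs from trying to shrink
          loop: "{{ ceph_osd_devices | dict2items }}"
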
2025-05-19 14:33:43.094337 | orchestrator | 2025-05-19 14:33:43.095126 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-19 14:33:43.095769 | orchestrator | Monday 19 May 2025 14:33:43 +0000 (0:00:00.201) 0:00:54.256 ************ 2025-05-19 14:33:44.899210 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'}) 2025-05-19 14:33:44.899318 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'}) 2025-05-19 14:33:44.900134 | orchestrator | 2025-05-19 14:33:44.900159 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-19 14:33:44.900941 | orchestrator | Monday 19 May 2025 14:33:44 +0000 (0:00:01.805) 0:00:56.061 ************ 2025-05-19 14:33:45.047135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:45.047243 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:45.047980 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:45.048579 | orchestrator | 2025-05-19 14:33:45.049234 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-19 14:33:45.049611 | orchestrator | Monday 19 May 2025 14:33:45 +0000 (0:00:00.149) 0:00:56.211 ************ 2025-05-19 14:33:46.353429 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'}) 2025-05-19 14:33:46.353989 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'}) 2025-05-19 14:33:46.354559 | orchestrator | 2025-05-19 14:33:46.356090 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-19 14:33:46.356713 | orchestrator | Monday 19 May 2025 14:33:46 +0000 (0:00:01.304) 0:00:57.516 ************ 2025-05-19 14:33:46.506117 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:46.506587 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:46.508015 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:46.508898 | orchestrator | 2025-05-19 14:33:46.509263 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-19 14:33:46.510051 | orchestrator | Monday 19 May 2025 14:33:46 +0000 (0:00:00.153) 0:00:57.669 ************ 2025-05-19 14:33:46.633672 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:46.634675 | orchestrator | 2025-05-19 14:33:46.637419 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-19 14:33:46.637442 | orchestrator | Monday 19 May 2025 14:33:46 +0000 (0:00:00.128) 0:00:57.798 ************ 2025-05-19 14:33:46.778234 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:46.779380 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:46.780751 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:46.781811 | orchestrator | 2025-05-19 14:33:46.782746 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-19 14:33:46.783406 | orchestrator | Monday 19 May 2025 14:33:46 +0000 (0:00:00.144) 0:00:57.942 ************ 2025-05-19 14:33:46.917573 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:46.917685 | orchestrator | 2025-05-19 14:33:46.917835 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-19 14:33:46.918149 | orchestrator | Monday 19 May 2025 14:33:46 +0000 (0:00:00.139) 0:00:58.082 ************ 2025-05-19 14:33:47.069101 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:47.070172 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:47.071004 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:47.072006 | orchestrator | 2025-05-19 14:33:47.073626 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-19 14:33:47.074169 | orchestrator | Monday 19 May 2025 14:33:47 +0000 (0:00:00.148) 0:00:58.231 ************ 2025-05-19 14:33:47.210386 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:47.210459 | orchestrator | 2025-05-19 14:33:47.210564 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-19 14:33:47.210966 | orchestrator | Monday 19 May 2025 14:33:47 +0000 (0:00:00.144) 0:00:58.375 ************ 2025-05-19 14:33:47.343415 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:47.344022 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:47.345075 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:47.345549 | orchestrator | 2025-05-19 14:33:47.346754 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-19 14:33:47.347332 | orchestrator | Monday 19 May 2025 14:33:47 +0000 (0:00:00.131) 0:00:58.507 ************ 2025-05-19 14:33:47.654689 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:33:47.654795 | orchestrator | 2025-05-19 14:33:47.655122 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-19 14:33:47.655893 | orchestrator | Monday 19 May 2025 14:33:47 +0000 (0:00:00.311) 0:00:58.819 ************ 2025-05-19 14:33:47.805698 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:47.805817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:47.805890 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:47.806117 | orchestrator | 2025-05-19 14:33:47.806623 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-19 14:33:47.808062 | orchestrator | Monday 19 May 2025 14:33:47 +0000 (0:00:00.151) 0:00:58.970 ************ 2025-05-19 14:33:47.942683 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:47.942780 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:47.943421 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:47.944892 | orchestrator | 2025-05-19 14:33:47.945745 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-19 14:33:47.946331 | orchestrator | Monday 19 May 2025 14:33:47 +0000 (0:00:00.136) 0:00:59.107 ************ 2025-05-19 14:33:48.102393 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:48.103640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:48.104549 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:48.105931 | orchestrator | 2025-05-19 14:33:48.106969 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-19 14:33:48.108162 | orchestrator | Monday 19 May 2025 14:33:48 +0000 (0:00:00.159) 0:00:59.266 ************ 2025-05-19 14:33:48.229024 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:48.230175 | orchestrator | 2025-05-19 14:33:48.230901 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-19 14:33:48.231805 | orchestrator | Monday 19 May 2025 14:33:48 +0000 (0:00:00.126) 0:00:59.393 ************ 2025-05-19 14:33:48.359839 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:48.360615 | orchestrator | 2025-05-19 14:33:48.361609 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-19 14:33:48.363536 | orchestrator | Monday 19 May 2025 14:33:48 +0000 (0:00:00.130) 0:00:59.524 ************ 2025-05-19 14:33:48.481046 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:48.481151 | orchestrator | 2025-05-19 14:33:48.481752 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-19 14:33:48.482221 | orchestrator | Monday 19 May 2025 14:33:48 +0000 (0:00:00.121) 0:00:59.645 ************ 2025-05-19 14:33:48.613287 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 14:33:48.613562 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-19 14:33:48.614372 | orchestrator | } 2025-05-19 14:33:48.615389 | orchestrator | 2025-05-19 14:33:48.617305 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-19 14:33:48.617959 | orchestrator | Monday 19 May 2025 14:33:48 +0000 (0:00:00.131) 0:00:59.777 ************ 2025-05-19 14:33:48.753436 | 
orchestrator | ok: [testbed-node-5] => { 2025-05-19 14:33:48.753709 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-19 14:33:48.754327 | orchestrator | } 2025-05-19 14:33:48.755223 | orchestrator | 2025-05-19 14:33:48.756887 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-19 14:33:48.756910 | orchestrator | Monday 19 May 2025 14:33:48 +0000 (0:00:00.139) 0:00:59.917 ************ 2025-05-19 14:33:48.884394 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 14:33:48.885742 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-19 14:33:48.886151 | orchestrator | } 2025-05-19 14:33:48.887981 | orchestrator | 2025-05-19 14:33:48.888006 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-19 14:33:48.888755 | orchestrator | Monday 19 May 2025 14:33:48 +0000 (0:00:00.130) 0:01:00.047 ************ 2025-05-19 14:33:49.384782 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:33:49.385272 | orchestrator | 2025-05-19 14:33:49.386791 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-19 14:33:49.386835 | orchestrator | Monday 19 May 2025 14:33:49 +0000 (0:00:00.499) 0:01:00.547 ************ 2025-05-19 14:33:49.890272 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:33:49.892371 | orchestrator | 2025-05-19 14:33:49.893220 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-19 14:33:49.893897 | orchestrator | Monday 19 May 2025 14:33:49 +0000 (0:00:00.505) 0:01:01.052 ************ 2025-05-19 14:33:50.573987 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:33:50.574894 | orchestrator | 2025-05-19 14:33:50.575828 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-19 14:33:50.576469 | orchestrator | Monday 19 May 2025 14:33:50 +0000 (0:00:00.683) 0:01:01.736 ************ 2025-05-19 14:33:50.714334 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:33:50.714981 | orchestrator | 2025-05-19 14:33:50.716033 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-19 14:33:50.718533 | orchestrator | Monday 19 May 2025 14:33:50 +0000 (0:00:00.141) 0:01:01.878 ************ 2025-05-19 14:33:50.829901 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:50.830859 | orchestrator | 2025-05-19 14:33:50.831877 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-19 14:33:50.833517 | orchestrator | Monday 19 May 2025 14:33:50 +0000 (0:00:00.115) 0:01:01.994 ************ 2025-05-19 14:33:50.936452 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:50.937027 | orchestrator | 2025-05-19 14:33:50.938317 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-19 14:33:50.938581 | orchestrator | Monday 19 May 2025 14:33:50 +0000 (0:00:00.106) 0:01:02.100 ************ 2025-05-19 14:33:51.084448 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 14:33:51.086152 | orchestrator |  "vgs_report": { 2025-05-19 14:33:51.086793 | orchestrator |  "vg": [] 2025-05-19 14:33:51.088117 | orchestrator |  } 2025-05-19 14:33:51.089210 | orchestrator | } 2025-05-19 14:33:51.089815 | orchestrator | 2025-05-19 14:33:51.090752 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-19 14:33:51.091335 | orchestrator | Monday 19 
May 2025 14:33:51 +0000 (0:00:00.147) 0:01:02.247 ************ 2025-05-19 14:33:51.211308 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:51.212342 | orchestrator | 2025-05-19 14:33:51.213053 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-19 14:33:51.213999 | orchestrator | Monday 19 May 2025 14:33:51 +0000 (0:00:00.127) 0:01:02.374 ************ 2025-05-19 14:33:51.349022 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:51.350646 | orchestrator | 2025-05-19 14:33:51.351529 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-19 14:33:51.352670 | orchestrator | Monday 19 May 2025 14:33:51 +0000 (0:00:00.138) 0:01:02.512 ************ 2025-05-19 14:33:51.502340 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:51.502600 | orchestrator | 2025-05-19 14:33:51.504734 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-19 14:33:51.504774 | orchestrator | Monday 19 May 2025 14:33:51 +0000 (0:00:00.151) 0:01:02.664 ************ 2025-05-19 14:33:51.637084 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:51.637372 | orchestrator | 2025-05-19 14:33:51.638186 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-19 14:33:51.640550 | orchestrator | Monday 19 May 2025 14:33:51 +0000 (0:00:00.135) 0:01:02.800 ************ 2025-05-19 14:33:51.769671 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:51.769775 | orchestrator | 2025-05-19 14:33:51.769962 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-19 14:33:51.770305 | orchestrator | Monday 19 May 2025 14:33:51 +0000 (0:00:00.133) 0:01:02.933 ************ 2025-05-19 14:33:51.900431 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:51.901516 | orchestrator | 2025-05-19 14:33:51.902201 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-19 14:33:51.903407 | orchestrator | Monday 19 May 2025 14:33:51 +0000 (0:00:00.131) 0:01:03.065 ************ 2025-05-19 14:33:52.030907 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:52.031006 | orchestrator | 2025-05-19 14:33:52.031910 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-19 14:33:52.033866 | orchestrator | Monday 19 May 2025 14:33:52 +0000 (0:00:00.129) 0:01:03.194 ************ 2025-05-19 14:33:52.153347 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:52.153482 | orchestrator | 2025-05-19 14:33:52.154171 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-19 14:33:52.154949 | orchestrator | Monday 19 May 2025 14:33:52 +0000 (0:00:00.121) 0:01:03.316 ************ 2025-05-19 14:33:52.458073 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:52.458994 | orchestrator | 2025-05-19 14:33:52.460716 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-19 14:33:52.460767 | orchestrator | Monday 19 May 2025 14:33:52 +0000 (0:00:00.303) 0:01:03.620 ************ 2025-05-19 14:33:52.592968 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:52.593916 | orchestrator | 2025-05-19 14:33:52.594939 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-19 14:33:52.595849 | 
orchestrator | Monday 19 May 2025 14:33:52 +0000 (0:00:00.137) 0:01:03.757 ************ 2025-05-19 14:33:52.716675 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:52.717582 | orchestrator | 2025-05-19 14:33:52.719801 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-19 14:33:52.720800 | orchestrator | Monday 19 May 2025 14:33:52 +0000 (0:00:00.123) 0:01:03.881 ************ 2025-05-19 14:33:52.848057 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:52.848605 | orchestrator | 2025-05-19 14:33:52.849211 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-19 14:33:52.850586 | orchestrator | Monday 19 May 2025 14:33:52 +0000 (0:00:00.130) 0:01:04.011 ************ 2025-05-19 14:33:52.980695 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:52.981359 | orchestrator | 2025-05-19 14:33:52.982465 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-19 14:33:52.983849 | orchestrator | Monday 19 May 2025 14:33:52 +0000 (0:00:00.132) 0:01:04.144 ************ 2025-05-19 14:33:53.110114 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:53.111195 | orchestrator | 2025-05-19 14:33:53.111892 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-19 14:33:53.112596 | orchestrator | Monday 19 May 2025 14:33:53 +0000 (0:00:00.129) 0:01:04.273 ************ 2025-05-19 14:33:53.263310 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:53.263552 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:53.266006 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:53.266630 | orchestrator | 2025-05-19 14:33:53.267559 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-19 14:33:53.268181 | orchestrator | Monday 19 May 2025 14:33:53 +0000 (0:00:00.153) 0:01:04.426 ************ 2025-05-19 14:33:53.414752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:53.415193 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:53.415885 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:53.416443 | orchestrator | 2025-05-19 14:33:53.417027 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-19 14:33:53.417682 | orchestrator | Monday 19 May 2025 14:33:53 +0000 (0:00:00.148) 0:01:04.575 ************ 2025-05-19 14:33:53.560750 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:53.561274 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:53.562630 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:53.563566 | orchestrator | 2025-05-19 14:33:53.564536 
| orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-19 14:33:53.565205 | orchestrator | Monday 19 May 2025 14:33:53 +0000 (0:00:00.148) 0:01:04.724 ************ 2025-05-19 14:33:53.707766 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:53.707968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:53.708616 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:53.710130 | orchestrator | 2025-05-19 14:33:53.710856 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-19 14:33:53.711732 | orchestrator | Monday 19 May 2025 14:33:53 +0000 (0:00:00.148) 0:01:04.872 ************ 2025-05-19 14:33:53.861140 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:53.863091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:53.863920 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:53.866202 | orchestrator | 2025-05-19 14:33:53.867669 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-19 14:33:53.867980 | orchestrator | Monday 19 May 2025 14:33:53 +0000 (0:00:00.152) 0:01:05.024 ************ 2025-05-19 14:33:54.023974 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:54.024161 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:54.024708 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:54.025290 | orchestrator | 2025-05-19 14:33:54.026521 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-19 14:33:54.027416 | orchestrator | Monday 19 May 2025 14:33:54 +0000 (0:00:00.163) 0:01:05.188 ************ 2025-05-19 14:33:54.405618 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:54.408354 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:54.408436 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:54.408959 | orchestrator | 2025-05-19 14:33:54.409958 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-19 14:33:54.410350 | orchestrator | Monday 19 May 2025 14:33:54 +0000 (0:00:00.378) 0:01:05.567 ************ 2025-05-19 14:33:54.540054 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:54.540334 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 
'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:54.541015 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:54.541526 | orchestrator | 2025-05-19 14:33:54.542537 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-19 14:33:54.543143 | orchestrator | Monday 19 May 2025 14:33:54 +0000 (0:00:00.137) 0:01:05.704 ************ 2025-05-19 14:33:55.057921 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:33:55.058218 | orchestrator | 2025-05-19 14:33:55.058348 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-19 14:33:55.058368 | orchestrator | Monday 19 May 2025 14:33:55 +0000 (0:00:00.517) 0:01:06.222 ************ 2025-05-19 14:33:55.584409 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:33:55.585048 | orchestrator | 2025-05-19 14:33:55.586200 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-19 14:33:55.588055 | orchestrator | Monday 19 May 2025 14:33:55 +0000 (0:00:00.525) 0:01:06.748 ************ 2025-05-19 14:33:55.746552 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:33:55.746643 | orchestrator | 2025-05-19 14:33:55.747450 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-19 14:33:55.748311 | orchestrator | Monday 19 May 2025 14:33:55 +0000 (0:00:00.161) 0:01:06.909 ************ 2025-05-19 14:33:55.921027 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'vg_name': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'}) 2025-05-19 14:33:55.921701 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'vg_name': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'}) 2025-05-19 14:33:55.922680 | orchestrator | 2025-05-19 14:33:55.923306 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-19 14:33:55.924562 | orchestrator | Monday 19 May 2025 14:33:55 +0000 (0:00:00.174) 0:01:07.084 ************ 2025-05-19 14:33:56.077666 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:56.077764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:56.077779 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:56.080777 | orchestrator | 2025-05-19 14:33:56.082824 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-19 14:33:56.083139 | orchestrator | Monday 19 May 2025 14:33:56 +0000 (0:00:00.154) 0:01:07.239 ************ 2025-05-19 14:33:56.226709 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:56.226884 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:56.227810 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:56.228367 | orchestrator | 2025-05-19 14:33:56.229056 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-19 14:33:56.229621 | 
orchestrator | Monday 19 May 2025 14:33:56 +0000 (0:00:00.151) 0:01:07.390 ************ 2025-05-19 14:33:56.372182 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})  2025-05-19 14:33:56.372382 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})  2025-05-19 14:33:56.372931 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:33:56.376273 | orchestrator | 2025-05-19 14:33:56.376298 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-19 14:33:56.377362 | orchestrator | Monday 19 May 2025 14:33:56 +0000 (0:00:00.146) 0:01:07.537 ************ 2025-05-19 14:33:56.525837 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 14:33:56.526392 | orchestrator |  "lvm_report": { 2025-05-19 14:33:56.527316 | orchestrator |  "lv": [ 2025-05-19 14:33:56.527602 | orchestrator |  { 2025-05-19 14:33:56.528479 | orchestrator |  "lv_name": "osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76", 2025-05-19 14:33:56.529279 | orchestrator |  "vg_name": "ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76" 2025-05-19 14:33:56.530098 | orchestrator |  }, 2025-05-19 14:33:56.530875 | orchestrator |  { 2025-05-19 14:33:56.531480 | orchestrator |  "lv_name": "osd-block-ad566f4e-67fb-565a-8346-037c8100dc24", 2025-05-19 14:33:56.532139 | orchestrator |  "vg_name": "ceph-ad566f4e-67fb-565a-8346-037c8100dc24" 2025-05-19 14:33:56.532601 | orchestrator |  } 2025-05-19 14:33:56.533543 | orchestrator |  ], 2025-05-19 14:33:56.534254 | orchestrator |  "pv": [ 2025-05-19 14:33:56.534950 | orchestrator |  { 2025-05-19 14:33:56.535703 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-19 14:33:56.536708 | orchestrator |  "vg_name": "ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76" 2025-05-19 14:33:56.536797 | orchestrator |  }, 2025-05-19 14:33:56.537678 | orchestrator |  { 2025-05-19 14:33:56.538122 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-19 14:33:56.538967 | orchestrator |  "vg_name": "ceph-ad566f4e-67fb-565a-8346-037c8100dc24" 2025-05-19 14:33:56.539931 | orchestrator |  } 2025-05-19 14:33:56.540424 | orchestrator |  ] 2025-05-19 14:33:56.541703 | orchestrator |  } 2025-05-19 14:33:56.541806 | orchestrator | } 2025-05-19 14:33:56.542872 | orchestrator | 2025-05-19 14:33:56.544098 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:33:56.544155 | orchestrator | 2025-05-19 14:33:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 14:33:56.544172 | orchestrator | 2025-05-19 14:33:56 | INFO  | Please wait and do not abort execution. 
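
The lvm_report printed above is assembled from the JSON report mode of the LVM tools: lvs and pvs both support --reportformat json, and their report arrays can be merged into one fact (the register names _db/wal/db_wal_vgs_cmd_output earlier suggest the VG size checks use the same mechanism via vgs). A minimal sketch of how such a report can be gathered; the register names mirror this log, while the exact command flags and the ceph- name filter are assumptions:

    ---
    # Sketch: gather Ceph LV/PV associations via LVM's JSON report mode
    # and combine them into one fact shaped like the lvm_report above.
    - hosts: testbed-node-5
      become: true
      tasks:
        - name: Get list of Ceph LVs with associated VGs
          ansible.builtin.command:
            cmd: lvs --reportformat json -o lv_name,vg_name --select 'vg_name=~"^ceph-"'
          register: _lvs_cmd_output
          changed_when: false

        - name: Get list of Ceph PVs with associated VGs
          ansible.builtin.command:
            cmd: pvs --reportformat json -o pv_name,vg_name --select 'vg_name=~"^ceph-"'
          register: _pvs_cmd_output
          changed_when: false

        - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
          ansible.builtin.set_fact:
            lvm_report:
              lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
              pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

        - name: Print LVM report data
          ansible.builtin.debug:
            var: lvm_report
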
2025-05-19 14:33:56.544973 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-19 14:33:56.545511 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-19 14:33:56.546105 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-19 14:33:56.546731 | orchestrator | 2025-05-19 14:33:56.547445 | orchestrator | 2025-05-19 14:33:56.547947 | orchestrator | 2025-05-19 14:33:56.548713 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:33:56.549280 | orchestrator | Monday 19 May 2025 14:33:56 +0000 (0:00:00.152) 0:01:07.689 ************ 2025-05-19 14:33:56.549870 | orchestrator | =============================================================================== 2025-05-19 14:33:56.550598 | orchestrator | Create block VGs -------------------------------------------------------- 5.57s 2025-05-19 14:33:56.551447 | orchestrator | Create block LVs -------------------------------------------------------- 3.96s 2025-05-19 14:33:56.551810 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.82s 2025-05-19 14:33:56.552245 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.67s 2025-05-19 14:33:56.553100 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.52s 2025-05-19 14:33:56.554146 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s 2025-05-19 14:33:56.554380 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.50s 2025-05-19 14:33:56.555220 | orchestrator | Add known partitions to the list of available block devices ------------- 1.34s 2025-05-19 14:33:56.555591 | orchestrator | Add known links to the list of available block devices ------------------ 1.14s 2025-05-19 14:33:56.556111 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s 2025-05-19 14:33:56.556678 | orchestrator | Print LVM report data --------------------------------------------------- 0.89s 2025-05-19 14:33:56.557203 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2025-05-19 14:33:56.557528 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.69s 2025-05-19 14:33:56.558188 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2025-05-19 14:33:56.558633 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.67s 2025-05-19 14:33:56.559027 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.66s 2025-05-19 14:33:56.559331 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.65s 2025-05-19 14:33:56.559967 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.65s 2025-05-19 14:33:56.560415 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-05-19 14:33:56.560741 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2025-05-19 14:33:58.865411 | orchestrator | 2025-05-19 14:33:58 | INFO  | Task 7b75d8d8-7bc9-41a9-aa57-f826fe9bc2cd (facts) was prepared for execution. 
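
The facts task queued here applies the osism.commons.facts role, whose output follows: it ensures the custom facts directory exists, optionally copies fact files into it (skipped on every host in this run), and then gathers facts across all hosts. The role builds on Ansible's standard local-facts convention; a minimal sketch of that convention, with a hypothetical fact file name and content:

    ---
    # Sketch of Ansible's local-facts convention: files dropped into
    # /etc/ansible/facts.d as *.fact (JSON, INI, or executable) appear
    # under ansible_local after the next setup run. The fact file name
    # and content below are hypothetical examples, not OSISM's.
    - hosts: all
      become: true
      tasks:
        - name: Create custom facts directory
          ansible.builtin.file:
            path: /etc/ansible/facts.d
            state: directory
            mode: "0755"

        - name: Copy fact files
          ansible.builtin.copy:
            dest: /etc/ansible/facts.d/testbed.fact
            content: '{ "deployment": "nutshell" }'
            mode: "0644"

        - name: Re-read local facts
          ansible.builtin.setup:
            filter: ansible_local
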
2025-05-19 14:33:58.865584 | orchestrator | 2025-05-19 14:33:58 | INFO  | It takes a moment until task 7b75d8d8-7bc9-41a9-aa57-f826fe9bc2cd (facts) has been started and output is visible here. 2025-05-19 14:34:02.894925 | orchestrator | 2025-05-19 14:34:02.895035 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-19 14:34:02.895384 | orchestrator | 2025-05-19 14:34:02.897764 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-19 14:34:02.899029 | orchestrator | Monday 19 May 2025 14:34:02 +0000 (0:00:00.259) 0:00:00.259 ************ 2025-05-19 14:34:03.963438 | orchestrator | ok: [testbed-manager] 2025-05-19 14:34:03.964191 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:34:03.968867 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:34:03.969880 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:34:03.970446 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:34:03.970890 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:34:03.971560 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:34:03.972592 | orchestrator | 2025-05-19 14:34:03.973093 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-19 14:34:03.973966 | orchestrator | Monday 19 May 2025 14:34:03 +0000 (0:00:01.069) 0:00:01.329 ************ 2025-05-19 14:34:04.127328 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:34:04.206116 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:34:04.285231 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:34:04.365330 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:34:04.451866 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:34:05.183724 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:34:05.188246 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:34:05.188503 | orchestrator | 2025-05-19 14:34:05.189728 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 14:34:05.192063 | orchestrator | 2025-05-19 14:34:05.193431 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 14:34:05.194620 | orchestrator | Monday 19 May 2025 14:34:05 +0000 (0:00:01.221) 0:00:02.551 ************ 2025-05-19 14:34:09.928300 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:34:09.928470 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:34:09.932978 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:34:09.933033 | orchestrator | ok: [testbed-manager] 2025-05-19 14:34:09.933045 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:34:09.933057 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:34:09.933988 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:34:09.934296 | orchestrator | 2025-05-19 14:34:09.935961 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-19 14:34:09.936945 | orchestrator | 2025-05-19 14:34:09.938180 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-19 14:34:09.938666 | orchestrator | Monday 19 May 2025 14:34:09 +0000 (0:00:04.746) 0:00:07.297 ************ 2025-05-19 14:34:10.085611 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:34:10.157364 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:34:10.234516 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:34:10.315810 | orchestrator | skipping: [testbed-node-2] 2025-05-19 
14:34:10.390865 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:34:10.430453 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:34:10.431501 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:34:10.432154 | orchestrator |
2025-05-19 14:34:10.433094 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:34:10.433789 | orchestrator | 2025-05-19 14:34:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 14:34:10.434497 | orchestrator | 2025-05-19 14:34:10 | INFO  | Please wait and do not abort execution.
2025-05-19 14:34:10.434890 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 14:34:10.435607 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 14:34:10.438000 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 14:34:10.438114 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 14:34:10.439234 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 14:34:10.439939 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 14:34:10.441147 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 14:34:10.441910 | orchestrator |
2025-05-19 14:34:10.442563 | orchestrator |
2025-05-19 14:34:10.443697 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:34:10.444120 | orchestrator | Monday 19 May 2025 14:34:10 +0000 (0:00:00.504) 0:00:07.802 ************
2025-05-19 14:34:10.444694 | orchestrator | ===============================================================================
2025-05-19 14:34:10.445459 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.75s
2025-05-19 14:34:10.446155 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s
2025-05-19 14:34:10.446838 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.07s
2025-05-19 14:34:10.447514 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-05-19 14:34:11.030469 | orchestrator |
2025-05-19 14:34:11.031926 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon May 19 14:34:11 UTC 2025
2025-05-19 14:34:11.031956 | orchestrator |
2025-05-19 14:34:12.641690 | orchestrator | 2025-05-19 14:34:12 | INFO  | Collection nutshell is prepared for execution
2025-05-19 14:34:12.641807 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [0] - dotfiles
2025-05-19 14:34:12.647462 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [0] - homer
2025-05-19 14:34:12.647585 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [0] - netdata
2025-05-19 14:34:12.647601 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [0] - openstackclient
2025-05-19 14:34:12.647613 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [0] - phpmyadmin
2025-05-19 14:34:12.647624 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [0] - common
2025-05-19 14:34:12.648803 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [1] -- loadbalancer
2025-05-19 14:34:12.648829 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [2] --- opensearch
2025-05-19 14:34:12.648987 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [2] --- mariadb-ng
2025-05-19 14:34:12.649032 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [3] ---- horizon
2025-05-19 14:34:12.649046 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [3] ---- keystone
2025-05-19 14:34:12.649104 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [4] ----- neutron
2025-05-19 14:34:12.649119 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [5] ------ wait-for-nova
2025-05-19 14:34:12.649290 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [5] ------ octavia
2025-05-19 14:34:12.649819 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [4] ----- barbican
2025-05-19 14:34:12.649845 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [4] ----- designate
2025-05-19 14:34:12.649904 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [4] ----- ironic
2025-05-19 14:34:12.650010 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [4] ----- placement
2025-05-19 14:34:12.650098 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [4] ----- magnum
2025-05-19 14:34:12.650324 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [1] -- openvswitch
2025-05-19 14:34:12.650555 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [2] --- ovn
2025-05-19 14:34:12.650578 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [1] -- memcached
2025-05-19 14:34:12.650753 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [1] -- redis
2025-05-19 14:34:12.650801 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [1] -- rabbitmq-ng
2025-05-19 14:34:12.650860 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [0] - kubernetes
2025-05-19 14:34:12.652556 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [1] -- kubeconfig
2025-05-19 14:34:12.652581 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [1] -- copy-kubeconfig
2025-05-19 14:34:12.652635 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [0] - ceph
2025-05-19 14:34:12.654143 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [1] -- ceph-pools
2025-05-19 14:34:12.654179 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [2] --- copy-ceph-keys
2025-05-19 14:34:12.654284 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [3] ---- cephclient
2025-05-19 14:34:12.654301 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-05-19 14:34:12.654312 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [4] ----- wait-for-keystone
2025-05-19 14:34:12.654323 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [5] ------ kolla-ceph-rgw
2025-05-19 14:34:12.654583 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [5] ------ glance
2025-05-19 14:34:12.654606 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [5] ------ cinder
2025-05-19 14:34:12.654680 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [5] ------ nova
2025-05-19 14:34:12.654768 | orchestrator | 2025-05-19 14:34:12 | INFO  | A [4] ----- prometheus
2025-05-19 14:34:12.654785 | orchestrator | 2025-05-19 14:34:12 | INFO  | D [5] ------ grafana
2025-05-19 14:34:12.823290 | orchestrator | 2025-05-19 14:34:12 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-05-19 14:34:12.823383 | orchestrator | 2025-05-19 14:34:12 | INFO  | Tasks are running in the background
2025-05-19 14:34:15.410157 | orchestrator | 2025-05-19 14:34:15 | INFO  | No task IDs specified, wait for all currently running tasks
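
From here the osism CLI polls the state of the seven background tasks once per second until they leave the STARTED state, as the log lines below show. The same wait-until pattern can be expressed in Ansible with until/retries/delay; the endpoint URL and JSON shape in this sketch are purely hypothetical stand-ins, since the manager's task API is not shown in this log:

    ---
    # Sketch: poll a background task's state once per second until it
    # is no longer STARTED. URL and response shape are hypothetical.
    - hosts: localhost
      gather_facts: false
      vars:
        task_id: f2725200-6105-46c1-9614-82e189055d5c
      tasks:
        - name: Wait until the task leaves the STARTED state
          ansible.builtin.uri:
            url: "http://testbed-manager.example:8000/api/tasks/{{ task_id }}"
            return_content: true
          register: _task
          until: _task.json.state != 'STARTED'
          retries: 600
          delay: 1
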
2025-05-19 14:34:17.569458 | orchestrator | 2025-05-19 14:34:17 | INFO  | Task f2725200-6105-46c1-9614-82e189055d5c is in state STARTED
2025-05-19 14:34:17.571639 | orchestrator | 2025-05-19 14:34:17 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:34:17.571852 | orchestrator | 2025-05-19 14:34:17 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED
2025-05-19 14:34:17.572349 | orchestrator | 2025-05-19 14:34:17 | INFO  | Task b5afd811-e865-4af3-aa7f-f2cd54f60b66 is in state STARTED
2025-05-19 14:34:17.572850 | orchestrator | 2025-05-19 14:34:17 | INFO  | Task b0787da4-b80a-4e56-9c39-2f2528f2d1d0 is in state STARTED
2025-05-19 14:34:17.573364 | orchestrator | 2025-05-19 14:34:17 | INFO  | Task 1e29343e-cc32-46a6-b095-e40b8bc46634 is in state STARTED
2025-05-19 14:34:17.573879 | orchestrator | 2025-05-19 14:34:17 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED
2025-05-19 14:34:17.573950 | orchestrator | 2025-05-19 14:34:17 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:34:42.121034 | orchestrator | 2025-05-19 14:34:42 | INFO  | Task f2725200-6105-46c1-9614-82e189055d5c is in state STARTED
2025-05-19 14:34:42.121172 | orchestrator | 2025-05-19 14:34:42 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:34:42.124577 | orchestrator | 2025-05-19 14:34:42 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED
2025-05-19 14:34:42.125233 | orchestrator | 2025-05-19 14:34:42 | INFO  | Task b5afd811-e865-4af3-aa7f-f2cd54f60b66 is in state STARTED
2025-05-19 14:34:42.126956 | orchestrator | 2025-05-19 14:34:42 | INFO  | Task b0787da4-b80a-4e56-9c39-2f2528f2d1d0 is in state STARTED
2025-05-19 14:34:42.129982 | orchestrator |
2025-05-19 14:34:42.130063 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-05-19 14:34:42.130078 | orchestrator |
2025-05-19 14:34:42.130089 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-05-19 14:34:42.130099 | orchestrator | Monday 19 May 2025 14:34:26 +0000 (0:00:01.060) 0:00:01.060 ************
2025-05-19 14:34:42.130110 | orchestrator | changed: [testbed-manager]
2025-05-19 14:34:42.130123 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:34:42.130133 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:34:42.130144 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:34:42.130155 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:34:42.130165 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:34:42.130175 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:34:42.130186 | orchestrator |
2025-05-19 14:34:42.130196 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
2025-05-19 14:34:42.130207 | orchestrator | Monday 19 May 2025 14:34:30 +0000 (0:00:04.442) 0:00:05.503 ************
2025-05-19 14:34:42.130218 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-05-19 14:34:42.130229 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-05-19 14:34:42.130239 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-05-19 14:34:42.130250 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-05-19 14:34:42.130260 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-05-19 14:34:42.130270 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-05-19 14:34:42.130281 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-05-19 14:34:42.130302 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-05-19 14:34:42.130313 | orchestrator | Monday 19 May 2025 14:34:32 +0000 (0:00:01.580) 0:00:07.083 ************
2025-05-19 14:34:42.130329 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 14:34:31.464622', 'end': '2025-05-19 14:34:31.468041', 'delta': '0:00:00.003419', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-19 14:34:42.130363 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 14:34:31.556303', 'end': '2025-05-19 14:34:31.562912', 'delta': '0:00:00.006609', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-19 14:34:42.130424 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 14:34:31.576303', 'end': '2025-05-19 14:34:31.585529', 'delta': '0:00:00.009226', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-19 14:34:42.130533 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 14:34:31.681278', 'end': '2025-05-19 14:34:31.691241', 'delta': '0:00:00.009963', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-19 14:34:42.130551 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 14:34:31.756199', 'end': '2025-05-19 14:34:31.767820', 'delta': '0:00:00.011621', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-19 14:34:42.130562 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 14:34:32.033230', 'end': '2025-05-19 14:34:32.041013', 'delta': '0:00:00.007783', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-19 14:34:42.130579 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 14:34:32.057108', 'end': '2025-05-19 14:34:32.063633', 'delta': '0:00:00.006525', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
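The per-host dictionaries above are the registered result of a probe command that is allowed to fail: note 'failed_when_result': False and 'cmd': ['ls', '-F', '~/.tmux.conf'] in each entry. A minimal sketch of that probe pattern, reconstructed from those fields; the task name and register variable are illustrative, not the role's actual source:

    - name: Check current state of a dotfile  # illustrative reconstruction
      ansible.builtin.command: ls -F ~/.tmux.conf
      register: dotfile_check
      failed_when: false
      changed_when: false

The non-zero return code is recorded instead of failing the play, so the follow-up task can decide whether a regular file has to be removed before the symlink is created.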
2025-05-19 14:34:42.130616 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-05-19 14:34:42.130628 | orchestrator | Monday 19 May 2025 14:34:34 +0000 (0:00:02.614) 0:00:09.698 ************
2025-05-19 14:34:42.130640 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-05-19 14:34:42.130653 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-05-19 14:34:42.130665 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-05-19 14:34:42.130676 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-05-19 14:34:42.130689 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-05-19 14:34:42.130701 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-05-19 14:34:42.130718 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-05-19 14:34:42.130759 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-05-19 14:34:42.130779 | orchestrator | Monday 19 May 2025 14:34:36 +0000 (0:00:01.937) 0:00:11.636 ************
2025-05-19 14:34:42.130800 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-05-19 14:34:42.130820 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-05-19 14:34:42.130837 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-05-19 14:34:42.130850 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-05-19 14:34:42.130864 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-05-19 14:34:42.130876 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-05-19 14:34:42.130888 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-05-19 14:34:42.130913 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:34:42.130935 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:34:42.130950 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:34:42.130961 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:34:42.130973 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:34:42.130983 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:34:42.130994 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:34:42.131005 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:34:42.131037 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:34:42.131048 | orchestrator | Monday 19 May 2025 14:34:40 +0000 (0:00:03.761) 0:00:15.397 ************
2025-05-19 14:34:42.131058 | orchestrator | ===============================================================================
2025-05-19 14:34:42.131069 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.44s
2025-05-19 14:34:42.131080 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.76s
2025-05-19 14:34:42.131090 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.61s
2025-05-19 14:34:42.131109 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.94s
2025-05-19 14:34:42.131120 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.58s
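The recap above closes the geerlingguy.dotfiles play. A minimal sketch of a play that would produce this task sequence, assuming the role's documented variables (dotfiles_repo, dotfiles_repo_local_destination, dotfiles_files); the repository URL and destination are illustrative, not taken from this job:

    - name: Apply role geerlingguy.dotfiles
      hosts: all
      roles:
        - role: geerlingguy.dotfiles
          vars:
            dotfiles_repo: https://example.com/dotfiles.git  # illustrative
            dotfiles_repo_local_destination: ~/dotfiles      # illustrative
            dotfiles_files:
              - .tmux.conf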
2025-05-19 14:34:42.131158 | orchestrator | 2025-05-19 14:34:42 | INFO  | Task a04e863d-b41e-46b7-b5f5-bf4afe4e2f48 is in state STARTED
2025-05-19 14:34:42.131171 | orchestrator | 2025-05-19 14:34:42 | INFO  | Task 1e29343e-cc32-46a6-b095-e40b8bc46634 is in state SUCCESS
2025-05-19 14:34:42.131182 | orchestrator | 2025-05-19 14:34:42 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED
2025-05-19 14:34:42.131262 | orchestrator | 2025-05-19 14:34:42 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:35:00.547633 | orchestrator | 2025-05-19 14:35:00 | INFO  | Task f2725200-6105-46c1-9614-82e189055d5c is in state STARTED
2025-05-19 14:35:00.547720 | orchestrator | 2025-05-19 14:35:00 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:35:00.548798 | orchestrator | 2025-05-19 14:35:00 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED
2025-05-19 14:35:00.548830 | orchestrator | 2025-05-19 14:35:00 | INFO  | Task b5afd811-e865-4af3-aa7f-f2cd54f60b66 is in state STARTED
2025-05-19 14:35:00.550608 | orchestrator | 2025-05-19 14:35:00 | INFO  | Task b0787da4-b80a-4e56-9c39-2f2528f2d1d0 is in state SUCCESS
2025-05-19 14:35:00.550651 | orchestrator | 2025-05-19 14:35:00 | INFO  | Task a04e863d-b41e-46b7-b5f5-bf4afe4e2f48 is in state STARTED
2025-05-19 14:35:00.550672 | orchestrator | 2025-05-19 14:35:00 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED
2025-05-19 14:35:00.550693 | orchestrator | 2025-05-19 14:35:00 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:35:12.751809 | orchestrator | 2025-05-19 14:35:12 | INFO  | Task f2725200-6105-46c1-9614-82e189055d5c is in state STARTED
2025-05-19 14:35:12.752807 | orchestrator | 2025-05-19 14:35:12 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:35:12.756780 | orchestrator | 2025-05-19 14:35:12 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED
2025-05-19 14:35:12.756815 | orchestrator | 2025-05-19 14:35:12 | INFO  | Task b5afd811-e865-4af3-aa7f-f2cd54f60b66 is in state SUCCESS
2025-05-19 14:35:12.757855 | orchestrator | 2025-05-19 14:35:12 | INFO  | Task a04e863d-b41e-46b7-b5f5-bf4afe4e2f48 is in state STARTED
2025-05-19 14:35:12.759555 | orchestrator | 2025-05-19 14:35:12 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED
2025-05-19 14:35:12.759602 | orchestrator | 2025-05-19 14:35:12 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:35:25.035261 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-05-19 14:35:25.035285 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-05-19 14:35:25.035298 | orchestrator | Monday 19 May 2025 14:34:25 +0000 (0:00:00.620) 0:00:00.620 ************
2025-05-19 14:35:25.035309 | orchestrator | ok: [testbed-manager] => {
2025-05-19 14:35:25.035323 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-19 14:35:25.035336 | orchestrator | }
2025-05-19 14:35:25.035359 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-05-19 14:35:25.035370 | orchestrator | Monday 19 May 2025 14:34:25 +0000 (0:00:00.419) 0:00:01.039 ************
2025-05-19 14:35:25.035381 | orchestrator | ok: [testbed-manager]
2025-05-19 14:35:25.035404 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-05-19 14:35:25.035415 | orchestrator | Monday 19 May 2025 14:34:27 +0000 (0:00:01.682) 0:00:02.722 ************
2025-05-19 14:35:25.035460 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-05-19 14:35:25.035480 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-05-19 14:35:25.035524 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-05-19 14:35:25.035543 | orchestrator | Monday 19 May 2025 14:34:28 +0000 (0:00:01.237) 0:00:03.959 ************
2025-05-19 14:35:25.035561 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.035589 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-05-19 14:35:25.035620 | orchestrator | Monday 19 May 2025 14:34:31 +0000 (0:00:02.495) 0:00:06.454 ************
2025-05-19 14:35:25.035632 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.035653 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-05-19 14:35:25.035663 | orchestrator | Monday 19 May 2025 14:34:32 +0000 (0:00:01.528) 0:00:07.983 ************
2025-05-19 14:35:25.035674 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-05-19 14:35:25.035685 | orchestrator | ok: [testbed-manager]
2025-05-19 14:35:25.035710 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-05-19 14:35:25.035722 | orchestrator | Monday 19 May 2025 14:34:56 +0000 (0:00:23.792) 0:00:31.775 ************
2025-05-19 14:35:25.035734 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.035758 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:35:25.035771 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:35:25.035810 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:35:25.035823 | orchestrator | Monday 19 May 2025 14:34:57 +0000 (0:00:01.447) 0:00:33.223 ************
2025-05-19 14:35:25.035835 | orchestrator | ===============================================================================
2025-05-19 14:35:25.035847 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 23.79s
2025-05-19 14:35:25.035859 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.50s
2025-05-19 14:35:25.035872 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.68s
2025-05-19 14:35:25.035884 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.53s
2025-05-19 14:35:25.035896 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.45s
2025-05-19 14:35:25.035908 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.24s
2025-05-19 14:35:25.035920 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.42s
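The "FAILED - RETRYING: ... (10 retries left)" line under "Manage homer service" above is Ansible's standard retry loop: the task is re-run until its until condition holds or the retries are exhausted. A minimal sketch of that pattern; the task body and thresholds are illustrative, not the role's actual implementation:

    - name: Manage homer service  # illustrative retry pattern
      ansible.builtin.command: docker compose -f /opt/homer/docker-compose.yml up -d
      register: result
      retries: 10
      delay: 5
      until: result.rc == 0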
2025-05-19 14:35:25.035957 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-05-19 14:35:25.035980 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-05-19 14:35:25.035993 | orchestrator | Monday 19 May 2025 14:34:25 +0000 (0:00:00.814) 0:00:00.814 ************
2025-05-19 14:35:25.036005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-05-19 14:35:25.036031 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-05-19 14:35:25.036043 | orchestrator | Monday 19 May 2025 14:34:26 +0000 (0:00:00.665) 0:00:01.479 ************
2025-05-19 14:35:25.036056 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-05-19 14:35:25.036068 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-05-19 14:35:25.036080 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-05-19 14:35:25.036102 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-05-19 14:35:25.036112 | orchestrator | Monday 19 May 2025 14:34:27 +0000 (0:00:01.638) 0:00:03.118 ************
2025-05-19 14:35:25.036123 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.036144 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-05-19 14:35:25.036162 | orchestrator | Monday 19 May 2025 14:34:29 +0000 (0:00:01.983) 0:00:05.101 ************
2025-05-19 14:35:25.036192 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-05-19 14:35:25.036204 | orchestrator | ok: [testbed-manager]
2025-05-19 14:35:25.036225 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-05-19 14:35:25.036236 | orchestrator | Monday 19 May 2025 14:35:04 +0000 (0:00:34.274) 0:00:39.376 ************
2025-05-19 14:35:25.036247 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.036268 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-05-19 14:35:25.036278 | orchestrator | Monday 19 May 2025 14:35:04 +0000 (0:00:00.847) 0:00:40.224 ************
2025-05-19 14:35:25.036289 | orchestrator | ok: [testbed-manager]
2025-05-19 14:35:25.036310 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-05-19 14:35:25.036321 | orchestrator | Monday 19 May 2025 14:35:05 +0000 (0:00:00.966) 0:00:41.191 ************
2025-05-19 14:35:25.036332 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.036353 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-05-19 14:35:25.036363 | orchestrator | Monday 19 May 2025 14:35:07 +0000 (0:00:01.867) 0:00:43.058 ************
2025-05-19 14:35:25.036374 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.036395 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-05-19 14:35:25.036410 | orchestrator | Monday 19 May 2025 14:35:08 +0000 (0:00:00.893) 0:00:43.952 ************
2025-05-19 14:35:25.036422 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.036463 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-05-19 14:35:25.036474 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:00.498) 0:00:44.450 ************
2025-05-19 14:35:25.036485 | orchestrator | ok: [testbed-manager]
2025-05-19 14:35:25.036506 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:35:25.036517 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:35:25.036548 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:35:25.036559 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:00.292) 0:00:44.742 ************
2025-05-19 14:35:25.036570 | orchestrator | ===============================================================================
2025-05-19 14:35:25.036580 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.27s
2025-05-19 14:35:25.036591 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.98s
2025-05-19 14:35:25.036602 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.87s
2025-05-19 14:35:25.036612 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.64s
2025-05-19 14:35:25.036623 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.97s
2025-05-19 14:35:25.036634 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.89s
2025-05-19 14:35:25.036644 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.85s
2025-05-19 14:35:25.036655 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.67s
2025-05-19 14:35:25.036666 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.50s
2025-05-19 14:35:25.036676 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.29s
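The "Wait for an healthy service" handler above blocks until the freshly restarted container reports a healthy Docker healthcheck. One way such a wait is commonly written, assuming the community.docker collection; the container name and thresholds are illustrative and this is not necessarily the role's actual code:

    - name: Wait for a healthy service  # illustrative
      community.docker.docker_container_info:
        name: openstackclient  # illustrative container name
      register: info
      retries: 30
      delay: 2
      until: info.container.State.Health.Status == 'healthy'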
2025-05-19 14:35:25.036708 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:35:25.036736 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:35:25.036747 | orchestrator | Monday 19 May 2025 14:34:24 +0000 (0:00:00.712) 0:00:00.712 ************
2025-05-19 14:35:25.036758 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-05-19 14:35:25.036768 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-05-19 14:35:25.036779 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-05-19 14:35:25.036790 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-05-19 14:35:25.036800 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-05-19 14:35:25.036811 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-05-19 14:35:25.036822 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
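The play above sorts hosts into dynamic groups such as enable_netdata_True, which is what lets the following netdata play target only hosts that enable the service. The pattern behind it is Ansible's group_by module keyed on a boolean variable; only the key mirrors the log, the play framing is a sketch:

    - name: Group hosts based on configuration
      hosts: all
      gather_facts: false
      tasks:
        - name: Group hosts based on enabled services
          ansible.builtin.group_by:
            key: "enable_netdata_{{ enable_netdata }}"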
2025-05-19 14:35:25.036843 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-05-19 14:35:25.036864 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-05-19 14:35:25.036875 | orchestrator | Monday 19 May 2025 14:34:27 +0000 (0:00:02.582) 0:00:03.294 ************
2025-05-19 14:35:25.036899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:35:25.036924 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-05-19 14:35:25.036934 | orchestrator | Monday 19 May 2025 14:34:29 +0000 (0:00:02.484) 0:00:05.779 ************
2025-05-19 14:35:25.036945 | orchestrator | ok: [testbed-manager]
2025-05-19 14:35:25.036956 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:35:25.036967 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:35:25.036978 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:35:25.036989 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:35:25.037005 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:35:25.037017 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:35:25.037038 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-05-19 14:35:25.037049 | orchestrator | Monday 19 May 2025 14:34:31 +0000 (0:00:01.981) 0:00:07.761 ************
2025-05-19 14:35:25.037060 | orchestrator | ok: [testbed-manager]
2025-05-19 14:35:25.037071 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:35:25.037081 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:35:25.037092 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:35:25.037103 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:35:25.037113 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:35:25.037124 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:35:25.037146 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-05-19 14:35:25.037157 | orchestrator | Monday 19 May 2025 14:34:35 +0000 (0:00:04.104) 0:00:11.865 ************
2025-05-19 14:35:25.037168 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:35:25.037179 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:35:25.037190 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.037200 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:35:25.037211 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:35:25.037222 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:35:25.037233 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:35:25.037254 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-05-19 14:35:25.037270 | orchestrator | Monday 19 May 2025 14:34:39 +0000 (0:00:03.109) 0:00:14.975 ************
2025-05-19 14:35:25.037281 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.037292 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:35:25.037314 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:35:25.037325 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:35:25.037335 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:35:25.037346 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:35:25.037357 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:35:25.037378 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-05-19 14:35:25.037389 | orchestrator | Monday 19 May 2025 14:34:48 +0000 (0:00:09.167) 0:00:24.142 ************
2025-05-19 14:35:25.037400 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.037411 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:35:25.037421 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:35:25.037479 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:35:25.037490 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:35:25.037500 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:35:25.037511 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:35:25.037532 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-05-19 14:35:25.037543 | orchestrator | Monday 19 May 2025 14:35:03 +0000 (0:00:15.142) 0:00:39.285 ************
2025-05-19 14:35:25.037554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:35:25.037577 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-05-19 14:35:25.037588 | orchestrator | Monday 19 May 2025 14:35:04 +0000 (0:00:01.554) 0:00:40.840 ************
2025-05-19 14:35:25.037599 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-05-19 14:35:25.037610 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-05-19 14:35:25.037620 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-05-19 14:35:25.037631 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-05-19 14:35:25.037641 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-05-19 14:35:25.037652 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-05-19 14:35:25.037662 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-05-19 14:35:25.037673 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-05-19 14:35:25.037684 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-05-19 14:35:25.037694 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-05-19 14:35:25.037705 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-05-19 14:35:25.037715 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-05-19 14:35:25.037725 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-05-19 14:35:25.037736 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-05-19 14:35:25.037757 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-05-19 14:35:25.037768 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:04.780) 0:00:45.621 ************
2025-05-19 14:35:25.037779 | orchestrator | ok: [testbed-manager]
2025-05-19 14:35:25.037790 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:35:25.037800 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:35:25.037811 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:35:25.037822 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:35:25.037832 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:35:25.037842 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:35:25.037863 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-05-19 14:35:25.037874 | orchestrator | Monday 19 May 2025 14:35:11 +0000 (0:00:01.413) 0:00:47.034 ************
2025-05-19 14:35:25.037884 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.037902 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:35:25.037913 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:35:25.037924 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:35:25.037934 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:35:25.037945 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:35:25.037955 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:35:25.037977 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-05-19 14:35:25.037994 | orchestrator | Monday 19 May 2025 14:35:12 +0000 (0:00:01.659) 0:00:48.694 ************
2025-05-19 14:35:25.038006 | orchestrator | ok: [testbed-manager]
2025-05-19 14:35:25.038098 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:35:25.038113 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:35:25.038124 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:35:25.038135 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:35:25.038145 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:35:25.038156 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:35:25.038177 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-05-19 14:35:25.038188 | orchestrator | Monday 19 May 2025 14:35:14 +0000 (0:00:01.308) 0:00:50.003 ************
2025-05-19 14:35:25.038199 | orchestrator | ok: [testbed-manager]
2025-05-19 14:35:25.038210 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:35:25.038220 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:35:25.038231 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:35:25.038241 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:35:25.038251 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:35:25.038262 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:35:25.038283 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-05-19 14:35:25.038294 | orchestrator | Monday 19 May 2025 14:35:15 +0000 (0:00:01.947) 0:00:51.950 ************
2025-05-19 14:35:25.038305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-05-19 14:35:25.038322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:35:25.038345 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-05-19 14:35:25.038355 | orchestrator | Monday 19 May 2025 14:35:17 +0000 (0:00:01.639) 0:00:53.590 ************
2025-05-19 14:35:25.038366 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.038388 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-05-19 14:35:25.038398 | orchestrator | Monday 19 May 2025 14:35:20 +0000 (0:00:02.367) 0:00:55.957 ************
2025-05-19 14:35:25.038409 | orchestrator | changed: [testbed-manager]
2025-05-19 14:35:25.038420 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:35:25.038483 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:35:25.038495 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:35:25.038506 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:35:25.038516 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:35:25.038527 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:35:25.038548 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:35:25.038559 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:35:25.038571 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:35:25.038581 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:35:25.038600 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:35:25.038611 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:35:25.038622 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:35:25.038633 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:35:25.038665 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:35:25.038675 | orchestrator | Monday 19 May 2025 14:35:23 +0000 (0:00:03.704) 0:00:59.661 ************
2025-05-19 14:35:25.038686 | orchestrator | ===============================================================================
2025-05-19 14:35:25.038697 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 15.14s
2025-05-19 14:35:25.038708 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.17s
2025-05-19 14:35:25.038718 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.78s
2025-05-19 14:35:25.038729 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.11s
2025-05-19 14:35:25.038739 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.70s
2025-05-19 14:35:25.038750 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.10s
2025-05-19 14:35:25.038761 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.58s
2025-05-19 14:35:25.038771 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.48s
2025-05-19 14:35:25.038782 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.37s
2025-05-19 14:35:25.038792 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.98s
2025-05-19 14:35:25.038803 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.95s
2025-05-19 14:35:25.038822 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.66s
2025-05-19 14:35:25.038834 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.64s
2025-05-19 14:35:25.038845 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.55s
2025-05-19 14:35:25.038855 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.41s
2025-05-19 14:35:25.038866 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.31s
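In the netdata play above, "Set sysctl vm.max_map_count parameter" ran only on testbed-manager, the streaming parent that has to hold the metrics of all child nodes. A minimal sketch of such a task, assuming the ansible.posix collection; the value is illustrative, not taken from this job:

    - name: Set sysctl vm.max_map_count parameter  # illustrative
      ansible.posix.sysctl:
        name: vm.max_map_count
        value: "262144"  # illustrative value
        state: present
        sysctl_set: true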
2025-05-19 14:35:25.038877 | orchestrator | 2025-05-19 14:35:25 | INFO  | Task f2725200-6105-46c1-9614-82e189055d5c is in state SUCCESS
2025-05-19 14:35:25.038889 | orchestrator | 2025-05-19 14:35:25 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:35:25.038900 | orchestrator | 2025-05-19 14:35:25 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED
2025-05-19 14:35:25.038911 | orchestrator | 2025-05-19 14:35:25 | INFO  | Task a04e863d-b41e-46b7-b5f5-bf4afe4e2f48 is in state STARTED
2025-05-19 14:35:25.038922 | orchestrator | 2025-05-19 14:35:25 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED
2025-05-19 14:35:25.038937 | orchestrator | 2025-05-19 14:35:25 | INFO  | Wait 1 second(s) until the next check
15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:35:55.705666 | orchestrator | 2025-05-19 14:35:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:35:58.772246 | orchestrator | 2025-05-19 14:35:58 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:35:58.777529 | orchestrator | 2025-05-19 14:35:58 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:35:58.780012 | orchestrator | 2025-05-19 14:35:58 | INFO  | Task a04e863d-b41e-46b7-b5f5-bf4afe4e2f48 is in state STARTED 2025-05-19 14:35:58.781199 | orchestrator | 2025-05-19 14:35:58 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:35:58.781224 | orchestrator | 2025-05-19 14:35:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:01.841623 | orchestrator | 2025-05-19 14:36:01 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:01.842432 | orchestrator | 2025-05-19 14:36:01 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:01.842982 | orchestrator | 2025-05-19 14:36:01 | INFO  | Task a04e863d-b41e-46b7-b5f5-bf4afe4e2f48 is in state STARTED 2025-05-19 14:36:01.844609 | orchestrator | 2025-05-19 14:36:01 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:01.844789 | orchestrator | 2025-05-19 14:36:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:04.894960 | orchestrator | 2025-05-19 14:36:04 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:04.896771 | orchestrator | 2025-05-19 14:36:04 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:04.896934 | orchestrator | 2025-05-19 14:36:04 | INFO  | Task a04e863d-b41e-46b7-b5f5-bf4afe4e2f48 is in state SUCCESS 2025-05-19 14:36:04.898703 | orchestrator | 2025-05-19 14:36:04 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:04.898846 | orchestrator | 2025-05-19 14:36:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:07.946719 | orchestrator | 2025-05-19 14:36:07 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:07.947757 | orchestrator | 2025-05-19 14:36:07 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:07.949604 | orchestrator | 2025-05-19 14:36:07 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:07.949629 | orchestrator | 2025-05-19 14:36:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:11.013950 | orchestrator | 2025-05-19 14:36:11 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:11.016011 | orchestrator | 2025-05-19 14:36:11 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:11.018742 | orchestrator | 2025-05-19 14:36:11 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:11.018767 | orchestrator | 2025-05-19 14:36:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:14.061236 | orchestrator | 2025-05-19 14:36:14 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:14.062817 | orchestrator | 2025-05-19 14:36:14 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:14.066260 | orchestrator | 2025-05-19 14:36:14 | INFO  | Task 
15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:14.066311 | orchestrator | 2025-05-19 14:36:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:17.122640 | orchestrator | 2025-05-19 14:36:17 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:17.123836 | orchestrator | 2025-05-19 14:36:17 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:17.125512 | orchestrator | 2025-05-19 14:36:17 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:17.126003 | orchestrator | 2025-05-19 14:36:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:20.180930 | orchestrator | 2025-05-19 14:36:20 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:20.182139 | orchestrator | 2025-05-19 14:36:20 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:20.183686 | orchestrator | 2025-05-19 14:36:20 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:20.183818 | orchestrator | 2025-05-19 14:36:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:23.226733 | orchestrator | 2025-05-19 14:36:23 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:23.227510 | orchestrator | 2025-05-19 14:36:23 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:23.228808 | orchestrator | 2025-05-19 14:36:23 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:23.231434 | orchestrator | 2025-05-19 14:36:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:26.287824 | orchestrator | 2025-05-19 14:36:26 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:26.288067 | orchestrator | 2025-05-19 14:36:26 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:26.291209 | orchestrator | 2025-05-19 14:36:26 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:26.291307 | orchestrator | 2025-05-19 14:36:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:29.330132 | orchestrator | 2025-05-19 14:36:29 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:29.331926 | orchestrator | 2025-05-19 14:36:29 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:29.333047 | orchestrator | 2025-05-19 14:36:29 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:29.333077 | orchestrator | 2025-05-19 14:36:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:32.373301 | orchestrator | 2025-05-19 14:36:32 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:32.375583 | orchestrator | 2025-05-19 14:36:32 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:32.378566 | orchestrator | 2025-05-19 14:36:32 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:32.378649 | orchestrator | 2025-05-19 14:36:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:35.420330 | orchestrator | 2025-05-19 14:36:35 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:35.420511 | orchestrator | 2025-05-19 14:36:35 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state 
STARTED 2025-05-19 14:36:35.420967 | orchestrator | 2025-05-19 14:36:35 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:35.421065 | orchestrator | 2025-05-19 14:36:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:38.467842 | orchestrator | 2025-05-19 14:36:38 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:38.468400 | orchestrator | 2025-05-19 14:36:38 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:38.469449 | orchestrator | 2025-05-19 14:36:38 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:38.470162 | orchestrator | 2025-05-19 14:36:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:41.517998 | orchestrator | 2025-05-19 14:36:41 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:41.520461 | orchestrator | 2025-05-19 14:36:41 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:41.523668 | orchestrator | 2025-05-19 14:36:41 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:41.524171 | orchestrator | 2025-05-19 14:36:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:44.572018 | orchestrator | 2025-05-19 14:36:44 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:44.573109 | orchestrator | 2025-05-19 14:36:44 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:44.575913 | orchestrator | 2025-05-19 14:36:44 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:44.576506 | orchestrator | 2025-05-19 14:36:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:47.636695 | orchestrator | 2025-05-19 14:36:47 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:47.637317 | orchestrator | 2025-05-19 14:36:47 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:47.641077 | orchestrator | 2025-05-19 14:36:47 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:47.641155 | orchestrator | 2025-05-19 14:36:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:50.694352 | orchestrator | 2025-05-19 14:36:50 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:50.696096 | orchestrator | 2025-05-19 14:36:50 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:50.700177 | orchestrator | 2025-05-19 14:36:50 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:50.700501 | orchestrator | 2025-05-19 14:36:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:53.756110 | orchestrator | 2025-05-19 14:36:53 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:36:53.761226 | orchestrator | 2025-05-19 14:36:53 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:53.763682 | orchestrator | 2025-05-19 14:36:53 | INFO  | Task 15ebb642-4485-4903-8cd5-a0f04cb592d7 is in state STARTED 2025-05-19 14:36:53.763716 | orchestrator | 2025-05-19 14:36:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:56.835961 | orchestrator | 2025-05-19 14:36:56 | INFO  | Task ffbca48a-1016-41a9-af8c-8fd762d5ad33 is in state STARTED 2025-05-19 14:36:56.836149 | orchestrator 
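The block above is a client-side polling loop: the deploy wrapper submits long-running tasks to the manager and then re-queries each task's state every few seconds until everything reports SUCCESS. A minimal sketch of that pattern, assuming a Celery-style state API (`get_state` is a hypothetical stand-in for the real osism client call):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll task states until every task reports SUCCESS.

    get_state: callable mapping a task id to a state string such as
    'STARTED' or 'SUCCESS' (hypothetical stand-in for the real API).
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard below is safe
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print("Wait 1 second(s) until the next check")
            time.sleep(interval)
```

Note that the wall-clock gap between checks (~3 s) is larger than the 1 s sleep because each round of state queries takes time of its own.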
2025-05-19 14:36:56.846596 | orchestrator |
2025-05-19 14:36:56.846642 | orchestrator |
2025-05-19 14:36:56.846655 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-19 14:36:56.846668 | orchestrator |
2025-05-19 14:36:56.846681 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-05-19 14:36:56.846692 | orchestrator | Monday 19 May 2025 14:34:46 +0000 (0:00:00.234) 0:00:00.234 ************
2025-05-19 14:36:56.846704 | orchestrator | ok: [testbed-manager]
2025-05-19 14:36:56.846717 | orchestrator |
2025-05-19 14:36:56.846742 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-05-19 14:36:56.846754 | orchestrator | Monday 19 May 2025 14:34:46 +0000 (0:00:00.862) 0:00:01.097 ************
2025-05-19 14:36:56.846766 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-19 14:36:56.846778 | orchestrator |
2025-05-19 14:36:56.846789 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-19 14:36:56.846801 | orchestrator | Monday 19 May 2025 14:34:47 +0000 (0:00:00.609) 0:00:01.707 ************
2025-05-19 14:36:56.846812 | orchestrator | changed: [testbed-manager]
2025-05-19 14:36:56.846823 | orchestrator |
2025-05-19 14:36:56.846835 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-05-19 14:36:56.846846 | orchestrator | Monday 19 May 2025 14:34:49 +0000 (0:00:01.460) 0:00:03.167 ************
2025-05-19 14:36:56.846863 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-05-19 14:36:56.846875 | orchestrator | ok: [testbed-manager]
2025-05-19 14:36:56.846905 | orchestrator |
2025-05-19 14:36:56.846917 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-19 14:36:56.846928 | orchestrator | Monday 19 May 2025 14:35:59 +0000 (0:01:10.449) 0:01:13.616 ************
2025-05-19 14:36:56.846939 | orchestrator | changed: [testbed-manager]
2025-05-19 14:36:56.846957 | orchestrator |
2025-05-19 14:36:56.846969 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:36:56.846981 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:36:56.846995 | orchestrator |
2025-05-19 14:36:56.847006 | orchestrator |
2025-05-19 14:36:56.847017 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:36:56.847029 | orchestrator | Monday 19 May 2025 14:36:03 +0000 (0:00:03.515) 0:01:17.132 ************
2025-05-19 14:36:56.847040 | orchestrator | ===============================================================================
2025-05-19 14:36:56.847051 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 70.45s
2025-05-19 14:36:56.847062 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.52s
2025-05-19 14:36:56.847073 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.46s
2025-05-19 14:36:56.847085 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.86s
2025-05-19 14:36:56.847096 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.61s
2025-05-19 14:36:56.847107 | orchestrator |
2025-05-19 14:36:56.847118 | orchestrator |
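The "Manage phpmyadmin service" task failed once and was retried, which is the usual Ansible `retries`/`until` pattern: start the compose project, then re-check until the container is actually up. A rough Python equivalent, assuming a container named `phpmyadmin` (the real name and the exact readiness condition live in the role):

```python
import subprocess
import time

def wait_until_running(container="phpmyadmin", retries=10, delay=5):
    """Re-check a container's state until it is running, mimicking the
    retries/until loop on the service task above."""
    for attempt in range(retries):
        result = subprocess.run(
            ["docker", "inspect", "-f", "{{.State.Running}}", container],
            capture_output=True, text=True,
        )
        if result.returncode == 0 and result.stdout.strip() == "true":
            return True
        print(f"FAILED - RETRYING: Manage phpmyadmin service "
              f"({retries - attempt - 1} retries left).")
        time.sleep(delay)
    return False
```

The 70.45s spent in this task in the recap is dominated by exactly this wait for the container to come up.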
2025-05-19 14:36:56.847129 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-19 14:36:56.847140 | orchestrator |
2025-05-19 14:36:56.847151 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-19 14:36:56.847162 | orchestrator | Monday 19 May 2025 14:34:17 +0000 (0:00:00.261) 0:00:00.261 ************
2025-05-19 14:36:56.847174 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:36:56.847186 | orchestrator |
2025-05-19 14:36:56.847197 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-19 14:36:56.847209 | orchestrator | Monday 19 May 2025 14:34:18 +0000 (0:00:01.362) 0:00:01.624 ************
2025-05-19 14:36:56.847220 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 14:36:56.847231 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 14:36:56.847242 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 14:36:56.847253 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 14:36:56.847265 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 14:36:56.847276 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 14:36:56.847287 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 14:36:56.847298 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 14:36:56.847309 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 14:36:56.847320 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 14:36:56.847331 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 14:36:56.847344 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 14:36:56.847391 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 14:36:56.847404 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 14:36:56.847422 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 14:36:56.847434 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 14:36:56.847457 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 14:36:56.847469 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 14:36:56.847480 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 14:36:56.847491 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 14:36:56.847502 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 14:36:56.847513 | orchestrator |
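Each loop item above pairs a service definition with its directory name, so one task creates a config directory per enabled service on every host. A small sketch of the same idea, with the base path and the service subset assumed from the kolla layout visible in the log:

```python
from pathlib import Path

# Hypothetical subset of the "common" services dict seen in the log items.
services = {
    "cron": {"enabled": True},
    "fluentd": {"enabled": True},
    "kolla-toolbox": {"enabled": True},
}

def ensure_config_dirs(base="/etc/kolla"):
    """Create one config directory per enabled service, mirroring the
    'Ensuring config directories exist' task."""
    for name, svc in services.items():
        if svc["enabled"]:
            Path(base, name).mkdir(parents=True, exist_ok=True)
```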
2025-05-19 14:36:56.847524 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-19 14:36:56.847535 | orchestrator | Monday 19 May 2025 14:34:22 +0000 (0:00:04.043) 0:00:05.667 ************
2025-05-19 14:36:56.847546 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:36:56.847558 | orchestrator |
2025-05-19 14:36:56.847569 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-05-19 14:36:56.847585 | orchestrator | Monday 19 May 2025 14:34:24 +0000 (0:00:01.269) 0:00:06.937 ************
2025-05-19 14:36:56.847601 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 14:36:56.847652 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:36:56.847759 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
[... the same fluentd, kolla-toolbox, and cron items changed on testbed-node-0 through testbed-node-5 ...]
2025-05-19 14:36:56.847913 | orchestrator |
2025-05-19 14:36:56.847924 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-05-19 14:36:56.847935 | orchestrator | Monday 19 May 2025 14:34:29 +0000 (0:00:04.861) 0:00:11.798 ************
[... the same fluentd, kolla-toolbox, and cron items skipped on every host ...]
2025-05-19 14:36:56.848006 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:36:56.848058 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:36:56.848117 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:36:56.848173 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:36:56.848270 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:36:56.848281 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:36:56.848333 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:36:56.848344 | orchestrator |
2025-05-19 14:36:56.848355 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-05-19 14:36:56.848383 | orchestrator | Monday 19 May 2025 14:34:30 +0000 (0:00:01.537) 0:00:13.336 ************
[... the same fluentd, kolla-toolbox, and cron items skipped on every host ...]
2025-05-19 14:36:56.848436 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:36:56.848492 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:36:56.849502 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:36:56.849604 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:36:56.849616 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:36:56.849627 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:36:56.849682 | orchestrator | skipping: [testbed-node-5]
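The extra-CA task changes every host while both backend-TLS tasks skip every item, which is consistent with per-task conditions evaluated against the same services dict: CA copying is switched on, backend TLS material is not. A schematic of that gating (the flag names here are assumptions for illustration, not read from this log):

```python
# Hypothetical deployment-wide switches in the style of kolla-ansible.
kolla_copy_ca_into_containers = True
kolla_enable_tls_backend = False

def cert_copy_tasks(service_items):
    """Yield (task, item) pairs the way the service-cert-copy role walks
    the services dict, skipping backend TLS items when disabled."""
    for item in service_items:
        if kolla_copy_ca_into_containers:
            yield ("Copying over extra CA certificates", item)
        if kolla_enable_tls_backend:
            yield ("Copying over backend internal TLS certificate", item)
            yield ("Copying over backend internal TLS key", item)
```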
2025-05-19 14:36:56.849693 | orchestrator |
2025-05-19 14:36:56.849704 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-05-19 14:36:56.849715 | orchestrator | Monday 19 May 2025 14:34:32 +0000 (0:00:02.226) 0:00:15.563 ************
2025-05-19 14:36:56.849726 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:36:56.849737 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:36:56.849748 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:36:56.849758 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:36:56.849769 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:36:56.849779 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:36:56.849790 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:36:56.849801 | orchestrator |
2025-05-19 14:36:56.849811 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-05-19 14:36:56.849822 | orchestrator | Monday 19 May 2025 14:34:33 +0000 (0:00:01.090) 0:00:16.654 ************
2025-05-19 14:36:56.849833 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:36:56.849843 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:36:56.849854 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:36:56.849864 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:36:56.849875 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:36:56.849885 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:36:56.849895 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:36:56.849906 | orchestrator |
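The config.json files copied in the next task follow kolla's container convention: each container gets a JSON document naming the command to run and the config files to install at startup, and with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS (visible in the environments above) they are re-copied on every container start. A minimal generator sketch; the field layout follows the kolla convention, while the command and paths below are purely illustrative:

```python
import json

def render_config_json(command, config_files):
    """Build a kolla-style config.json body.

    config_files: iterable of (source, dest, owner, perm) tuples.
    """
    return json.dumps(
        {
            "command": command,
            "config_files": [
                {"source": src, "dest": dest, "owner": owner, "perm": perm}
                for src, dest, owner, perm in config_files
            ],
        },
        indent=4,
    )

# Illustrative use for a cron-like container (values are assumptions):
example = render_config_json(
    "cron -f",
    [("/var/lib/kolla/config_files/crontabs/", "/var/spool/cron/", "root", "0600")],
)
```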
changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.850060 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.850077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850091 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.850137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.850169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850281 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.850293 | orchestrator | 2025-05-19 14:36:56.850304 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-19 14:36:56.850315 | orchestrator | Monday 19 May 2025 14:34:40 +0000 (0:00:05.807) 0:00:23.234 ************ 2025-05-19 14:36:56.850326 | orchestrator | [WARNING]: Skipped 2025-05-19 14:36:56.850337 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-19 14:36:56.850348 | orchestrator | to this access issue: 2025-05-19 14:36:56.850389 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-19 14:36:56.850401 | orchestrator | directory 2025-05-19 14:36:56.850412 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 
14:36:56.850423 | orchestrator | 2025-05-19 14:36:56.850433 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-19 14:36:56.850455 | orchestrator | Monday 19 May 2025 14:34:42 +0000 (0:00:01.875) 0:00:25.109 ************ 2025-05-19 14:36:56.850466 | orchestrator | [WARNING]: Skipped 2025-05-19 14:36:56.850477 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-19 14:36:56.850488 | orchestrator | to this access issue: 2025-05-19 14:36:56.850499 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-19 14:36:56.850510 | orchestrator | directory 2025-05-19 14:36:56.850520 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 14:36:56.850531 | orchestrator | 2025-05-19 14:36:56.850541 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-19 14:36:56.850552 | orchestrator | Monday 19 May 2025 14:34:43 +0000 (0:00:01.063) 0:00:26.173 ************ 2025-05-19 14:36:56.850563 | orchestrator | [WARNING]: Skipped 2025-05-19 14:36:56.850574 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-19 14:36:56.850584 | orchestrator | to this access issue: 2025-05-19 14:36:56.850595 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-19 14:36:56.850606 | orchestrator | directory 2025-05-19 14:36:56.850616 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 14:36:56.850627 | orchestrator | 2025-05-19 14:36:56.850638 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-19 14:36:56.850648 | orchestrator | Monday 19 May 2025 14:34:44 +0000 (0:00:00.915) 0:00:27.088 ************ 2025-05-19 14:36:56.850659 | orchestrator | [WARNING]: Skipped 2025-05-19 14:36:56.850670 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-19 14:36:56.850687 | orchestrator | to this access issue: 2025-05-19 14:36:56.850697 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-19 14:36:56.850708 | orchestrator | directory 2025-05-19 14:36:56.850719 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 14:36:56.850729 | orchestrator | 2025-05-19 14:36:56.850740 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-05-19 14:36:56.850751 | orchestrator | Monday 19 May 2025 14:34:45 +0000 (0:00:00.752) 0:00:27.841 ************ 2025-05-19 14:36:56.850761 | orchestrator | changed: [testbed-manager] 2025-05-19 14:36:56.850772 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:36:56.850783 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:36:56.850793 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:36:56.850804 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:36:56.850814 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:36:56.850825 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:36:56.850836 | orchestrator | 2025-05-19 14:36:56.850847 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-19 14:36:56.850857 | orchestrator | Monday 19 May 2025 14:34:49 +0000 (0:00:04.117) 0:00:31.958 ************ 2025-05-19 14:36:56.850868 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 14:36:56.850879 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 14:36:56.850890 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 14:36:56.850907 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 14:36:56.850919 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 14:36:56.850930 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 14:36:56.850940 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 14:36:56.850951 | orchestrator | 2025-05-19 14:36:56.850962 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-19 14:36:56.850972 | orchestrator | Monday 19 May 2025 14:34:52 +0000 (0:00:02.804) 0:00:34.763 ************ 2025-05-19 14:36:56.850983 | orchestrator | changed: [testbed-manager] 2025-05-19 14:36:56.850994 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:36:56.851005 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:36:56.851015 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:36:56.851026 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:36:56.851036 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:36:56.851046 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:36:56.851057 | orchestrator | 2025-05-19 14:36:56.851068 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-19 14:36:56.851082 | orchestrator | Monday 19 May 2025 14:34:54 +0000 (0:00:02.429) 0:00:37.192 ************ 2025-05-19 14:36:56.851094 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851106 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:36:56.851123 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851135 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:36:56.851165 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:36:56.851193 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851205 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851217 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:36:56.851245 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:36:56.851275 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:36:56.851303 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851315 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:36:56.851348 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851375 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851387 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851398 | orchestrator | 2025-05-19 14:36:56.851409 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-19 14:36:56.851420 | orchestrator | Monday 19 May 2025 14:34:57 +0000 (0:00:02.795) 0:00:39.990 ************ 2025-05-19 14:36:56.851431 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 14:36:56.851442 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 14:36:56.851453 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 14:36:56.851473 | 
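
The ownership task above iterates the same service map and fixes owner and permissions on each /etc/kolla/<service> directory; note that the kolla-toolbox items are skipped on every host. A minimal sketch of the pattern, with assumed variable names and a guard that merely reproduces the skips seen here (the role's actual condition may differ):

- name: Ensuring config directories have correct owner and permission
  become: true
  file:
    path: "/etc/kolla/{{ item.key }}"
    owner: "{{ config_owner_user }}"    # assumed variable names
    group: "{{ config_owner_group }}"
    mode: "0770"
    recurse: true
  when:
    - item.value.enabled | bool
    - item.key != 'kolla-toolbox'       # assumption: reproduces the skips above
  with_dict: "{{ common_services }}"
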
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 14:36:56.851485 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 14:36:56.851496 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 14:36:56.851506 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 14:36:56.851517 | orchestrator | 2025-05-19 14:36:56.851527 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-19 14:36:56.851538 | orchestrator | Monday 19 May 2025 14:34:59 +0000 (0:00:02.619) 0:00:42.609 ************ 2025-05-19 14:36:56.851549 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 14:36:56.851560 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 14:36:56.851570 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 14:36:56.851587 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 14:36:56.851601 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 14:36:56.851612 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 14:36:56.851623 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 14:36:56.851634 | orchestrator | 2025-05-19 14:36:56.851644 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-19 14:36:56.851655 | orchestrator | Monday 19 May 2025 14:35:02 +0000 (0:00:02.509) 0:00:45.119 ************ 2025-05-19 14:36:56.851666 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851700 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851773 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
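
"Check common containers" reconciles each container against its definition from the service map and queues a restart handler where they diverge. kolla-ansible uses its own container module for this; as a rough equivalent only, the same loop can be sketched with community.docker (note how the templated notify lines up with the handler names that run later in this play):

- name: Check common containers
  become: true
  community.docker.docker_container:
    name: "{{ item.value.container_name }}"
    image: "{{ item.value.image }}"
    env: "{{ item.value.environment | default({}) }}"
    volumes: "{{ item.value.volumes }}"
    privileged: "{{ item.value.privileged | default(false) }}"
    state: started
  when: item.value.enabled | bool
  with_dict: "{{ common_services }}"
  notify: "Restart {{ item.key }} container"   # e.g. Restart fluentd container
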
2025-05-19 14:36:56.851785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 14:36:56.851807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851903 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851914 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:36:56.851936 | orchestrator | 2025-05-19 14:36:56.851952 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-19 14:36:56.851970 | orchestrator | Monday 19 May 2025 14:35:05 +0000 (0:00:03.530) 0:00:48.650 ************ 2025-05-19 14:36:56.851981 | orchestrator | changed: [testbed-manager] 2025-05-19 14:36:56.851991 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:36:56.852002 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:36:56.852013 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:36:56.852024 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:36:56.852034 | orchestrator | changed: 
[testbed-node-4] 2025-05-19 14:36:56.852045 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:36:56.852055 | orchestrator | 2025-05-19 14:36:56.852066 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-19 14:36:56.852077 | orchestrator | Monday 19 May 2025 14:35:07 +0000 (0:00:01.760) 0:00:50.410 ************ 2025-05-19 14:36:56.852087 | orchestrator | changed: [testbed-manager] 2025-05-19 14:36:56.852098 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:36:56.852108 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:36:56.852119 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:36:56.852129 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:36:56.852140 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:36:56.852150 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:36:56.852160 | orchestrator | 2025-05-19 14:36:56.852171 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 14:36:56.852182 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:01.595) 0:00:52.006 ************ 2025-05-19 14:36:56.852192 | orchestrator | 2025-05-19 14:36:56.852203 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 14:36:56.852213 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:00.054) 0:00:52.061 ************ 2025-05-19 14:36:56.852224 | orchestrator | 2025-05-19 14:36:56.852235 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 14:36:56.852246 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:00.075) 0:00:52.136 ************ 2025-05-19 14:36:56.852256 | orchestrator | 2025-05-19 14:36:56.852267 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 14:36:56.852278 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:00.188) 0:00:52.325 ************ 2025-05-19 14:36:56.852288 | orchestrator | 2025-05-19 14:36:56.852299 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 14:36:56.852310 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:00.070) 0:00:52.396 ************ 2025-05-19 14:36:56.852320 | orchestrator | 2025-05-19 14:36:56.852331 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 14:36:56.852341 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:00.067) 0:00:52.463 ************ 2025-05-19 14:36:56.852352 | orchestrator | 2025-05-19 14:36:56.852378 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 14:36:56.852389 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:00.063) 0:00:52.527 ************ 2025-05-19 14:36:56.852400 | orchestrator | 2025-05-19 14:36:56.852411 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-19 14:36:56.852421 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:00.094) 0:00:52.621 ************ 2025-05-19 14:36:56.852432 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:36:56.852443 | orchestrator | changed: [testbed-manager] 2025-05-19 14:36:56.852453 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:36:56.852464 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:36:56.852474 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:36:56.852485 | orchestrator | 
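
The empty "Flush handlers" tasks above are meta: flush_handlers barriers: they force any restart notifications queued by the earlier config tasks to run at that point rather than at the end of the play, which is why the container restarts appear here. A self-contained sketch of the pattern, with hypothetical file names (the role itself uses kolla's own container module in its handlers):

- hosts: all
  tasks:
    - name: Copying over fluentd.conf          # hypothetical stand-in for the config tasks
      template:
        src: fluentd.conf.j2
        dest: /etc/kolla/fluentd/fluentd.conf
      notify: Restart fluentd container

    - name: Flush handlers
      meta: flush_handlers                     # queued restarts run here, not at play end

  handlers:
    - name: Restart fluentd container
      community.docker.docker_container:       # sketch; not the role's actual module
        name: fluentd
        state: started
        restart: true
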
changed: [testbed-node-1] 2025-05-19 14:36:56.852495 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:36:56.852506 | orchestrator | 2025-05-19 14:36:56.852517 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-19 14:36:56.852527 | orchestrator | Monday 19 May 2025 14:35:54 +0000 (0:00:44.751) 0:01:37.373 ************ 2025-05-19 14:36:56.852544 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:36:56.852555 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:36:56.852565 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:36:56.852576 | orchestrator | changed: [testbed-manager] 2025-05-19 14:36:56.852586 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:36:56.852597 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:36:56.852607 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:36:56.852618 | orchestrator | 2025-05-19 14:36:56.852629 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-19 14:36:56.852640 | orchestrator | Monday 19 May 2025 14:36:42 +0000 (0:00:47.869) 0:02:25.243 ************ 2025-05-19 14:36:56.852651 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:36:56.852661 | orchestrator | ok: [testbed-manager] 2025-05-19 14:36:56.852672 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:36:56.852683 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:36:56.852693 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:36:56.852704 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:36:56.852714 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:36:56.852725 | orchestrator | 2025-05-19 14:36:56.852736 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-19 14:36:56.852746 | orchestrator | Monday 19 May 2025 14:36:44 +0000 (0:00:02.117) 0:02:27.360 ************ 2025-05-19 14:36:56.852757 | orchestrator | changed: [testbed-manager] 2025-05-19 14:36:56.852768 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:36:56.852779 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:36:56.852789 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:36:56.852800 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:36:56.852811 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:36:56.852821 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:36:56.852832 | orchestrator | 2025-05-19 14:36:56.852843 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:36:56.852855 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-19 14:36:56.852866 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-19 14:36:56.852883 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-19 14:36:56.852895 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-19 14:36:56.852906 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-19 14:36:56.852916 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-19 14:36:56.852927 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-19 14:36:56.852938 | orchestrator | 2025-05-19 
14:36:56.852949 | orchestrator | 2025-05-19 14:36:56.852959 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:36:56.852970 | orchestrator | Monday 19 May 2025 14:36:53 +0000 (0:00:09.263) 0:02:36.623 ************ 2025-05-19 14:36:56.852989 | orchestrator | =============================================================================== 2025-05-19 14:36:56.853000 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 47.87s 2025-05-19 14:36:56.853010 | orchestrator | common : Restart fluentd container ------------------------------------- 44.75s 2025-05-19 14:36:56.853021 | orchestrator | common : Restart cron container ----------------------------------------- 9.26s 2025-05-19 14:36:56.853032 | orchestrator | common : Copying over config.json files for services -------------------- 5.81s 2025-05-19 14:36:56.853049 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.86s 2025-05-19 14:36:56.853060 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.12s 2025-05-19 14:36:56.853071 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.04s 2025-05-19 14:36:56.853081 | orchestrator | common : Check common containers ---------------------------------------- 3.53s 2025-05-19 14:36:56.853092 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.80s 2025-05-19 14:36:56.853102 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.80s 2025-05-19 14:36:56.853113 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.62s 2025-05-19 14:36:56.853124 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.51s 2025-05-19 14:36:56.853134 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.43s 2025-05-19 14:36:56.853145 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.23s 2025-05-19 14:36:56.853155 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.12s 2025-05-19 14:36:56.853166 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.88s 2025-05-19 14:36:56.853177 | orchestrator | common : Creating log volume -------------------------------------------- 1.76s 2025-05-19 14:36:56.853187 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.60s 2025-05-19 14:36:56.853198 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.54s 2025-05-19 14:36:56.853209 | orchestrator | common : include_tasks -------------------------------------------------- 1.36s 2025-05-19 14:36:56.853220 | orchestrator | 2025-05-19 14:36:56 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:36:56.853230 | orchestrator | 2025-05-19 14:36:56 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:36:56.853241 | orchestrator | 2025-05-19 14:36:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:36:59.912770 | orchestrator | 2025-05-19 14:36:59 | INFO  | Task ffbca48a-1016-41a9-af8c-8fd762d5ad33 is in state STARTED 2025-05-19 14:36:59.912889 | orchestrator | 2025-05-19 14:36:59 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 
14:36:59.912905 | orchestrator | 2025-05-19 14:36:59 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:36:59.912916 | orchestrator | 2025-05-19 14:36:59 | INFO  | Task b293d34b-da87-44b3-b894-2fc989b0579b is in state STARTED 2025-05-19 14:36:59.912927 | orchestrator | 2025-05-19 14:36:59 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:36:59.913076 | orchestrator | 2025-05-19 14:36:59 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:36:59.913095 | orchestrator | 2025-05-19 14:36:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:02.937523 | orchestrator | 2025-05-19 14:37:02 | INFO  | Task ffbca48a-1016-41a9-af8c-8fd762d5ad33 is in state STARTED 2025-05-19 14:37:02.937676 | orchestrator | 2025-05-19 14:37:02 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:02.937790 | orchestrator | 2025-05-19 14:37:02 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:02.938396 | orchestrator | 2025-05-19 14:37:02 | INFO  | Task b293d34b-da87-44b3-b894-2fc989b0579b is in state STARTED 2025-05-19 14:37:02.938856 | orchestrator | 2025-05-19 14:37:02 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:02.939420 | orchestrator | 2025-05-19 14:37:02 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:02.940018 | orchestrator | 2025-05-19 14:37:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:05.961654 | orchestrator | 2025-05-19 14:37:05 | INFO  | Task ffbca48a-1016-41a9-af8c-8fd762d5ad33 is in state STARTED 2025-05-19 14:37:05.961866 | orchestrator | 2025-05-19 14:37:05 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:05.962532 | orchestrator | 2025-05-19 14:37:05 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:05.963642 | orchestrator | 2025-05-19 14:37:05 | INFO  | Task b293d34b-da87-44b3-b894-2fc989b0579b is in state STARTED 2025-05-19 14:37:05.964395 | orchestrator | 2025-05-19 14:37:05 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:05.965204 | orchestrator | 2025-05-19 14:37:05 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:05.965273 | orchestrator | 2025-05-19 14:37:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:09.004416 | orchestrator | 2025-05-19 14:37:09 | INFO  | Task ffbca48a-1016-41a9-af8c-8fd762d5ad33 is in state STARTED 2025-05-19 14:37:09.004581 | orchestrator | 2025-05-19 14:37:09 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:09.004959 | orchestrator | 2025-05-19 14:37:09 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:09.005518 | orchestrator | 2025-05-19 14:37:09 | INFO  | Task b293d34b-da87-44b3-b894-2fc989b0579b is in state STARTED 2025-05-19 14:37:09.008825 | orchestrator | 2025-05-19 14:37:09 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:09.012668 | orchestrator | 2025-05-19 14:37:09 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:09.012706 | orchestrator | 2025-05-19 14:37:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:12.043618 | orchestrator | 2025-05-19 14:37:12 | INFO  | Task ffbca48a-1016-41a9-af8c-8fd762d5ad33 is in 
state STARTED 2025-05-19 14:37:12.043711 | orchestrator | 2025-05-19 14:37:12 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:12.044101 | orchestrator | 2025-05-19 14:37:12 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:12.044673 | orchestrator | 2025-05-19 14:37:12 | INFO  | Task b293d34b-da87-44b3-b894-2fc989b0579b is in state SUCCESS 2025-05-19 14:37:12.045734 | orchestrator | 2025-05-19 14:37:12 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:12.046235 | orchestrator | 2025-05-19 14:37:12 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:12.046306 | orchestrator | 2025-05-19 14:37:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:15.075836 | orchestrator | 2025-05-19 14:37:15 | INFO  | Task ffbca48a-1016-41a9-af8c-8fd762d5ad33 is in state STARTED 2025-05-19 14:37:15.076415 | orchestrator | 2025-05-19 14:37:15 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:15.077481 | orchestrator | 2025-05-19 14:37:15 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:15.080087 | orchestrator | 2025-05-19 14:37:15 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:15.081929 | orchestrator | 2025-05-19 14:37:15 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:15.084863 | orchestrator | 2025-05-19 14:37:15 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:15.084926 | orchestrator | 2025-05-19 14:37:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:18.118682 | orchestrator | 2025-05-19 14:37:18 | INFO  | Task ffbca48a-1016-41a9-af8c-8fd762d5ad33 is in state STARTED 2025-05-19 14:37:18.119892 | orchestrator | 2025-05-19 14:37:18 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:18.120626 | orchestrator | 2025-05-19 14:37:18 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:18.121133 | orchestrator | 2025-05-19 14:37:18 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:18.122905 | orchestrator | 2025-05-19 14:37:18 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:18.123674 | orchestrator | 2025-05-19 14:37:18 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:18.123692 | orchestrator | 2025-05-19 14:37:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:21.152408 | orchestrator | 2025-05-19 14:37:21 | INFO  | Task ffbca48a-1016-41a9-af8c-8fd762d5ad33 is in state STARTED 2025-05-19 14:37:21.152566 | orchestrator | 2025-05-19 14:37:21 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:21.153114 | orchestrator | 2025-05-19 14:37:21 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:21.153757 | orchestrator | 2025-05-19 14:37:21 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:21.154439 | orchestrator | 2025-05-19 14:37:21 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:21.155406 | orchestrator | 2025-05-19 14:37:21 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:21.155427 | orchestrator | 2025-05-19 14:37:21 | INFO  | 
Wait 1 second(s) until the next check 2025-05-19 14:37:24.182382 | orchestrator | 2025-05-19 14:37:24.182484 | orchestrator | 2025-05-19 14:37:24.182503 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 14:37:24.182515 | orchestrator | 2025-05-19 14:37:24.182526 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:37:24.182537 | orchestrator | Monday 19 May 2025 14:37:00 +0000 (0:00:00.240) 0:00:00.240 ************ 2025-05-19 14:37:24.182548 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:37:24.182559 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:37:24.182569 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:37:24.182580 | orchestrator | 2025-05-19 14:37:24.182591 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:37:24.182602 | orchestrator | Monday 19 May 2025 14:37:01 +0000 (0:00:00.250) 0:00:00.491 ************ 2025-05-19 14:37:24.182613 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-19 14:37:24.182624 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-19 14:37:24.182635 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-19 14:37:24.182645 | orchestrator | 2025-05-19 14:37:24.182656 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-19 14:37:24.182667 | orchestrator | 2025-05-19 14:37:24.182677 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-19 14:37:24.182689 | orchestrator | Monday 19 May 2025 14:37:01 +0000 (0:00:00.335) 0:00:00.827 ************ 2025-05-19 14:37:24.182700 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:37:24.182711 | orchestrator | 2025-05-19 14:37:24.182722 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-19 14:37:24.182733 | orchestrator | Monday 19 May 2025 14:37:02 +0000 (0:00:00.748) 0:00:01.575 ************ 2025-05-19 14:37:24.182765 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-19 14:37:24.182777 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-19 14:37:24.182787 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-19 14:37:24.182798 | orchestrator | 2025-05-19 14:37:24.182808 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-19 14:37:24.182819 | orchestrator | Monday 19 May 2025 14:37:03 +0000 (0:00:00.823) 0:00:02.399 ************ 2025-05-19 14:37:24.182829 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-19 14:37:24.182840 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-19 14:37:24.182851 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-19 14:37:24.182861 | orchestrator | 2025-05-19 14:37:24.182872 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-19 14:37:24.182882 | orchestrator | Monday 19 May 2025 14:37:05 +0000 (0:00:02.781) 0:00:05.180 ************ 2025-05-19 14:37:24.182893 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:37:24.182904 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:37:24.182916 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:37:24.182929 | 
orchestrator | 2025-05-19 14:37:24.182941 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-19 14:37:24.182953 | orchestrator | Monday 19 May 2025 14:37:08 +0000 (0:00:02.565) 0:00:07.746 ************ 2025-05-19 14:37:24.182965 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:37:24.182977 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:37:24.182989 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:37:24.183001 | orchestrator | 2025-05-19 14:37:24.183013 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:37:24.183025 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:37:24.183038 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:37:24.183050 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:37:24.183062 | orchestrator | 2025-05-19 14:37:24.183074 | orchestrator | 2025-05-19 14:37:24.183086 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:37:24.183098 | orchestrator | Monday 19 May 2025 14:37:11 +0000 (0:00:02.631) 0:00:10.377 ************ 2025-05-19 14:37:24.183111 | orchestrator | =============================================================================== 2025-05-19 14:37:24.183122 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.78s 2025-05-19 14:37:24.183134 | orchestrator | memcached : Restart memcached container --------------------------------- 2.63s 2025-05-19 14:37:24.183146 | orchestrator | memcached : Check memcached container ----------------------------------- 2.57s 2025-05-19 14:37:24.183158 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.82s 2025-05-19 14:37:24.183170 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.75s 2025-05-19 14:37:24.183181 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2025-05-19 14:37:24.183193 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s 2025-05-19 14:37:24.183206 | orchestrator | 2025-05-19 14:37:24.183218 | orchestrator | 2025-05-19 14:37:24.183230 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 14:37:24.183241 | orchestrator | 2025-05-19 14:37:24.183253 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:37:24.183278 | orchestrator | Monday 19 May 2025 14:37:01 +0000 (0:00:00.226) 0:00:00.226 ************ 2025-05-19 14:37:24.183289 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:37:24.183300 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:37:24.183311 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:37:24.183328 | orchestrator | 2025-05-19 14:37:24.183366 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:37:24.183407 | orchestrator | Monday 19 May 2025 14:37:01 +0000 (0:00:00.241) 0:00:00.468 ************ 2025-05-19 14:37:24.183428 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-19 14:37:24.183447 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-19 14:37:24.183465 | 
orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-19 14:37:24.183481 | orchestrator | 2025-05-19 14:37:24.183499 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-19 14:37:24.183517 | orchestrator | 2025-05-19 14:37:24.183536 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-19 14:37:24.183552 | orchestrator | Monday 19 May 2025 14:37:01 +0000 (0:00:00.365) 0:00:00.833 ************ 2025-05-19 14:37:24.183568 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:37:24.183583 | orchestrator | 2025-05-19 14:37:24.183599 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-19 14:37:24.183616 | orchestrator | Monday 19 May 2025 14:37:02 +0000 (0:00:00.678) 0:00:01.512 ************ 2025-05-19 14:37:24.183636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.183659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.183678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.183698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.183718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.183777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.183799 | orchestrator | 2025-05-19 14:37:24.183818 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-19 14:37:24.183836 | orchestrator | Monday 19 May 2025 14:37:04 +0000 (0:00:01.512) 0:00:03.024 ************ 2025-05-19 14:37:24.183855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.183876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.183894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.183913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.183933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.183980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184001 | orchestrator | 2025-05-19 14:37:24.184021 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-19 14:37:24.184039 | orchestrator | Monday 19 May 2025 14:37:07 +0000 (0:00:03.196) 0:00:06.220 ************ 2025-05-19 14:37:24.184059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 
'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184194 | orchestrator | 2025-05-19 14:37:24.184222 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-19 14:37:24.184243 | orchestrator | Monday 19 May 2025 14:37:10 +0000 (0:00:02.976) 0:00:09.196 ************ 2025-05-19 14:37:24.184263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184283 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 14:37:24.184417 | orchestrator | 2025-05-19 14:37:24.184443 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-19 14:37:24.184460 | orchestrator | Monday 19 May 2025 14:37:11 +0000 (0:00:01.563) 
0:00:10.760 ************ 2025-05-19 14:37:24.184478 | orchestrator | 2025-05-19 14:37:24.184496 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-19 14:37:24.184526 | orchestrator | Monday 19 May 2025 14:37:11 +0000 (0:00:00.066) 0:00:10.827 ************ 2025-05-19 14:37:24.184545 | orchestrator | 2025-05-19 14:37:24.184564 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-19 14:37:24.184584 | orchestrator | Monday 19 May 2025 14:37:11 +0000 (0:00:00.068) 0:00:10.896 ************ 2025-05-19 14:37:24.184601 | orchestrator | 2025-05-19 14:37:24.184620 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-19 14:37:24.184639 | orchestrator | Monday 19 May 2025 14:37:12 +0000 (0:00:00.072) 0:00:10.968 ************ 2025-05-19 14:37:24.184657 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:37:24.184674 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:37:24.184693 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:37:24.184711 | orchestrator | 2025-05-19 14:37:24.184729 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-19 14:37:24.184747 | orchestrator | Monday 19 May 2025 14:37:15 +0000 (0:00:03.652) 0:00:14.620 ************ 2025-05-19 14:37:24.184766 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:37:24.184783 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:37:24.184801 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:37:24.184818 | orchestrator | 2025-05-19 14:37:24.184836 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:37:24.184854 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:37:24.184873 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:37:24.184892 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:37:24.184910 | orchestrator | 2025-05-19 14:37:24.184928 | orchestrator | 2025-05-19 14:37:24.184946 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:37:24.184964 | orchestrator | Monday 19 May 2025 14:37:23 +0000 (0:00:08.286) 0:00:22.907 ************ 2025-05-19 14:37:24.184983 | orchestrator | =============================================================================== 2025-05-19 14:37:24.185001 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.29s 2025-05-19 14:37:24.185020 | orchestrator | redis : Restart redis container ----------------------------------------- 3.65s 2025-05-19 14:37:24.185053 | orchestrator | redis : Copying over default config.json files -------------------------- 3.20s 2025-05-19 14:37:24.185072 | orchestrator | redis : Copying over redis config files --------------------------------- 2.98s 2025-05-19 14:37:24.185090 | orchestrator | redis : Check redis containers ------------------------------------------ 1.56s 2025-05-19 14:37:24.185108 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.51s 2025-05-19 14:37:24.185126 | orchestrator | redis : include_tasks --------------------------------------------------- 0.68s 2025-05-19 14:37:24.185145 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.37s 2025-05-19 14:37:24.185164 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.24s 2025-05-19 14:37:24.185183 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s 2025-05-19 14:37:24.185424 | orchestrator | 2025-05-19 14:37:24 | INFO  | Task ffbca48a-1016-41a9-af8c-8fd762d5ad33 is in state SUCCESS 2025-05-19 14:37:24.185457 | orchestrator | 2025-05-19 14:37:24 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:24.185476 | orchestrator | 2025-05-19 14:37:24 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:24.185496 | orchestrator | 2025-05-19 14:37:24 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:24.185515 | orchestrator | 2025-05-19 14:37:24 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:24.185535 | orchestrator | 2025-05-19 14:37:24 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:24.185554 | orchestrator | 2025-05-19 14:37:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:27.213224 | orchestrator | 2025-05-19 14:37:27 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:27.213321 | orchestrator | 2025-05-19 14:37:27 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:27.214073 | orchestrator | 2025-05-19 14:37:27 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:27.215026 | orchestrator | 2025-05-19 14:37:27 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:27.216059 | orchestrator | 2025-05-19 14:37:27 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:27.216092 | orchestrator | 2025-05-19 14:37:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:30.278691 | orchestrator | 2025-05-19 14:37:30 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:30.278800 | orchestrator | 2025-05-19 14:37:30 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:30.279544 | orchestrator | 2025-05-19 14:37:30 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:30.281871 | orchestrator | 2025-05-19 14:37:30 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:30.282927 | orchestrator | 2025-05-19 14:37:30 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:30.282947 | orchestrator | 2025-05-19 14:37:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:33.327394 | orchestrator | 2025-05-19 14:37:33 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:33.327741 | orchestrator | 2025-05-19 14:37:33 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:33.329103 | orchestrator | 2025-05-19 14:37:33 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:33.330545 | orchestrator | 2025-05-19 14:37:33 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:33.334607 | orchestrator | 2025-05-19 14:37:33 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:33.334650 | orchestrator | 2025-05-19 14:37:33 | INFO  
| Wait 1 second(s) until the next check 2025-05-19 14:37:36.379225 | orchestrator | 2025-05-19 14:37:36 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:36.380535 | orchestrator | 2025-05-19 14:37:36 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:36.384384 | orchestrator | 2025-05-19 14:37:36 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:36.386088 | orchestrator | 2025-05-19 14:37:36 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:36.386772 | orchestrator | 2025-05-19 14:37:36 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:36.387117 | orchestrator | 2025-05-19 14:37:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:39.417660 | orchestrator | 2025-05-19 14:37:39 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:39.418255 | orchestrator | 2025-05-19 14:37:39 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:39.418905 | orchestrator | 2025-05-19 14:37:39 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:39.420638 | orchestrator | 2025-05-19 14:37:39 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:39.421405 | orchestrator | 2025-05-19 14:37:39 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:39.421554 | orchestrator | 2025-05-19 14:37:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:42.457186 | orchestrator | 2025-05-19 14:37:42 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:42.458682 | orchestrator | 2025-05-19 14:37:42 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:42.459568 | orchestrator | 2025-05-19 14:37:42 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:42.462191 | orchestrator | 2025-05-19 14:37:42 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:42.462514 | orchestrator | 2025-05-19 14:37:42 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:42.462768 | orchestrator | 2025-05-19 14:37:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:45.494383 | orchestrator | 2025-05-19 14:37:45 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:45.497806 | orchestrator | 2025-05-19 14:37:45 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:45.498779 | orchestrator | 2025-05-19 14:37:45 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:45.499819 | orchestrator | 2025-05-19 14:37:45 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:45.500712 | orchestrator | 2025-05-19 14:37:45 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:45.500824 | orchestrator | 2025-05-19 14:37:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:48.546481 | orchestrator | 2025-05-19 14:37:48 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:48.546795 | orchestrator | 2025-05-19 14:37:48 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:48.548598 | orchestrator | 2025-05-19 14:37:48 | INFO  | 
Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:48.550341 | orchestrator | 2025-05-19 14:37:48 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:48.553600 | orchestrator | 2025-05-19 14:37:48 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:48.553922 | orchestrator | 2025-05-19 14:37:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:51.623578 | orchestrator | 2025-05-19 14:37:51 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:51.630788 | orchestrator | 2025-05-19 14:37:51 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:51.631556 | orchestrator | 2025-05-19 14:37:51 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:51.632459 | orchestrator | 2025-05-19 14:37:51 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:51.633100 | orchestrator | 2025-05-19 14:37:51 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:51.633242 | orchestrator | 2025-05-19 14:37:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:54.667420 | orchestrator | 2025-05-19 14:37:54 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:54.667586 | orchestrator | 2025-05-19 14:37:54 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:54.668015 | orchestrator | 2025-05-19 14:37:54 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:54.668555 | orchestrator | 2025-05-19 14:37:54 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:54.670336 | orchestrator | 2025-05-19 14:37:54 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:54.670385 | orchestrator | 2025-05-19 14:37:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:37:57.716577 | orchestrator | 2025-05-19 14:37:57 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:37:57.716660 | orchestrator | 2025-05-19 14:37:57 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:37:57.717167 | orchestrator | 2025-05-19 14:37:57 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:37:57.718704 | orchestrator | 2025-05-19 14:37:57 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state STARTED 2025-05-19 14:37:57.719988 | orchestrator | 2025-05-19 14:37:57 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:37:57.720268 | orchestrator | 2025-05-19 14:37:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:38:00.756081 | orchestrator | 2025-05-19 14:38:00 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:38:00.757232 | orchestrator | 2025-05-19 14:38:00 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:38:00.758545 | orchestrator | 2025-05-19 14:38:00 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED 2025-05-19 14:38:00.760197 | orchestrator | 2025-05-19 14:38:00 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:38:00.761376 | orchestrator | 2025-05-19 14:38:00 | INFO  | Task 127813c2-f6ab-487b-ad05-616e76b8bb38 is in state SUCCESS 2025-05-19 14:38:00.763654 | orchestrator | 
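The run above shows the deployment monitor's polling pattern: each cycle queries every outstanding task ID, prints its Celery-style state (STARTED until the corresponding play finishes, then SUCCESS), and announces a one-second wait; the stream timestamps land roughly three seconds apart, suggesting the state queries themselves add a couple of seconds per cycle. A minimal stdlib-only sketch of that pattern, with get_state standing in for whatever backend the osism manager actually queries (an assumption, not the real client):

import time
from typing import Callable, Iterable


def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    # Poll every outstanding task and drop it once it reaches a
    # terminal state; sleep between cycles as the log messages describe.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)

Note that new task IDs appear mid-run in the log (f574db88 and 3c7f2221 above), so the real monitor evidently also picks up tasks enqueued after polling starts, which this sketch does not model.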
2025-05-19 14:38:00.763740 | orchestrator | 2025-05-19 14:38:00.763755 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 14:38:00.763768 | orchestrator | 2025-05-19 14:38:00.763779 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:38:00.763790 | orchestrator | Monday 19 May 2025 14:37:01 +0000 (0:00:00.361) 0:00:00.361 ************ 2025-05-19 14:38:00.763801 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:38:00.763813 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:38:00.763824 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:38:00.763834 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:38:00.763845 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:38:00.763855 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:38:00.763866 | orchestrator | 2025-05-19 14:38:00.763877 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:38:00.763896 | orchestrator | Monday 19 May 2025 14:37:02 +0000 (0:00:01.219) 0:00:01.580 ************ 2025-05-19 14:38:00.763908 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 14:38:00.763919 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 14:38:00.763930 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 14:38:00.763941 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 14:38:00.763952 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 14:38:00.763963 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 14:38:00.763973 | orchestrator | 2025-05-19 14:38:00.763984 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-19 14:38:00.763995 | orchestrator | 2025-05-19 14:38:00.764006 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-19 14:38:00.764017 | orchestrator | Monday 19 May 2025 14:37:04 +0000 (0:00:01.300) 0:00:02.881 ************ 2025-05-19 14:38:00.764028 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:38:00.764040 | orchestrator | 2025-05-19 14:38:00.764051 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-19 14:38:00.764062 | orchestrator | Monday 19 May 2025 14:37:06 +0000 (0:00:02.187) 0:00:05.068 ************ 2025-05-19 14:38:00.764073 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-19 14:38:00.764085 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-19 14:38:00.764096 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-19 14:38:00.764107 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-19 14:38:00.764118 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-19 14:38:00.764129 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-19 14:38:00.764140 | orchestrator | 2025-05-19 14:38:00.764150 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-19 14:38:00.764161 | 
orchestrator | Monday 19 May 2025 14:37:07 +0000 (0:00:01.289) 0:00:06.358 ************ 2025-05-19 14:38:00.764172 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-19 14:38:00.764185 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-19 14:38:00.764198 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-19 14:38:00.764210 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-19 14:38:00.764222 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-19 14:38:00.764234 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-19 14:38:00.764246 | orchestrator | 2025-05-19 14:38:00.764258 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-19 14:38:00.764271 | orchestrator | Monday 19 May 2025 14:37:09 +0000 (0:00:01.994) 0:00:08.353 ************ 2025-05-19 14:38:00.764362 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-19 14:38:00.764378 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:00.764391 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-19 14:38:00.764402 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-19 14:38:00.764413 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:38:00.764424 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-19 14:38:00.764434 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:38:00.764445 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-19 14:38:00.764456 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:38:00.764466 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:38:00.764477 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-19 14:38:00.764488 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:38:00.764498 | orchestrator | 2025-05-19 14:38:00.764509 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-19 14:38:00.764520 | orchestrator | Monday 19 May 2025 14:37:11 +0000 (0:00:01.393) 0:00:09.746 ************ 2025-05-19 14:38:00.764531 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:00.764542 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:38:00.764552 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:38:00.764563 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:38:00.764573 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:38:00.764583 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:38:00.764593 | orchestrator | 2025-05-19 14:38:00.764602 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-19 14:38:00.764612 | orchestrator | Monday 19 May 2025 14:37:11 +0000 (0:00:00.571) 0:00:10.318 ************ 2025-05-19 14:38:00.764640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 14:38:00.764661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 14:38:00.764672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 14:38:00.764688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 14:38:00.764698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 14:38:00.764709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 14:38:00.764729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 14:38:00.764741 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 14:38:00.764751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 14:38:00.764767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 14:38:00.764778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.764793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.764804 | orchestrator |
2025-05-19 14:38:00.764814 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-05-19 14:38:00.764824 | orchestrator | Monday 19 May 2025 14:37:13 +0000 (0:00:01.868) 0:00:12.186 ************
2025-05-19 14:38:00.764838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.764849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.764864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.764875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.764885 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.764908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.764922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.764933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.764947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.764958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.764968 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.764984 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.764994 | orchestrator |
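The 'test' entries in the service dicts above are the containers' Docker healthcheck commands. A minimal manual spot-check of the same probes, assuming a Docker runtime and the container names shown in the dicts:

  docker exec openvswitch_db ovsdb-client list-dbs       # healthcheck test for openvswitch-db-server
  docker exec openvswitch_vswitchd ovs-appctl version    # healthcheck test for openvswitch-vswitchd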
2025-05-19 14:38:00.765004 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-05-19 14:38:00.765014 | orchestrator | Monday 19 May 2025 14:37:16 +0000 (0:00:02.734) 0:00:14.920 ************
2025-05-19 14:38:00.765027 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:00.765037 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:00.765047 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:00.765056 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:00.765065 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:00.765075 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:00.765084 | orchestrator |
2025-05-19 14:38:00.765094 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-05-19 14:38:00.765109 | orchestrator | Monday 19 May 2025 14:37:17 +0000 (0:00:00.737) 0:00:15.658 ************
2025-05-19 14:38:00.765119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.765129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.765139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.765149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.765165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.765179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 14:38:00.765194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.765205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.765214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.765225 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.765240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.765259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 14:38:00.765274 | orchestrator |
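For readers less familiar with kolla-ansible's container dicts: a rough docker run equivalent of the openvswitch_db definition logged above would look like the sketch below. This is illustrative only; kolla-ansible launches the container itself with additional environment and command handling not visible in the dict.

  docker run -d --name openvswitch_db \
    --volume /etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro \
    --volume /etc/localtime:/etc/localtime:ro \
    --volume /etc/timezone:/etc/timezone:ro \
    --volume /lib/modules:/lib/modules:ro \
    --volume /run/openvswitch:/run/openvswitch:shared \
    --volume kolla_logs:/var/log/kolla/ \
    --volume openvswitch_db:/var/lib/openvswitch/ \
    --health-cmd 'ovsdb-client list-dbs' --health-interval 30s \
    --health-retries 3 --health-start-period 5s --health-timeout 30s \
    registry.osism.tech/kolla/openvswitch-db-server:2024.2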
2025-05-19 14:38:00.765284 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-19 14:38:00.765309 | orchestrator | Monday 19 May 2025 14:37:19 +0000 (0:00:02.150) 0:00:17.809 ************
2025-05-19 14:38:00.765319 | orchestrator |
2025-05-19 14:38:00.765328 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-19 14:38:00.765338 | orchestrator | Monday 19 May 2025 14:37:19 +0000 (0:00:00.119) 0:00:17.928 ************
2025-05-19 14:38:00.765348 | orchestrator |
2025-05-19 14:38:00.765357 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-19 14:38:00.765367 | orchestrator | Monday 19 May 2025 14:37:19 +0000 (0:00:00.121) 0:00:18.050 ************
2025-05-19 14:38:00.765376 | orchestrator |
2025-05-19 14:38:00.765386 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-19 14:38:00.765395 | orchestrator | Monday 19 May 2025 14:37:19 +0000 (0:00:00.138) 0:00:18.188 ************
2025-05-19 14:38:00.765405 | orchestrator |
2025-05-19 14:38:00.765414 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-19 14:38:00.765424 | orchestrator | Monday 19 May 2025 14:37:19 +0000 (0:00:00.127) 0:00:18.316 ************
2025-05-19 14:38:00.765434 | orchestrator |
2025-05-19 14:38:00.765443 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-19 14:38:00.765453 | orchestrator | Monday 19 May 2025 14:37:19 +0000 (0:00:00.117) 0:00:18.433 ************
2025-05-19 14:38:00.765462 | orchestrator |
2025-05-19 14:38:00.765472 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-05-19 14:38:00.765481 | orchestrator | Monday 19 May 2025 14:37:20 +0000 (0:00:00.226) 0:00:18.659 ************
2025-05-19 14:38:00.765491 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:38:00.765501 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:38:00.765510 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:38:00.765520 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:38:00.765529 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:38:00.765539 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:38:00.765548 | orchestrator |
2025-05-19 14:38:00.765558 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-05-19 14:38:00.765567 | orchestrator | Monday 19 May 2025 14:37:26 +0000 (0:00:06.828) 0:00:25.488 ************
2025-05-19 14:38:00.765577 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:38:00.765587 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:38:00.765596 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:38:00.765606 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:38:00.765615 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:38:00.765625 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:38:00.765634 | orchestrator |
2025-05-19 14:38:00.765644 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-05-19 14:38:00.765654 | orchestrator | Monday 19 May 2025 14:37:28 +0000 (0:00:01.288) 0:00:26.777 ************
2025-05-19 14:38:00.765663 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:38:00.765673 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:38:00.765682 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:38:00.765692 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:38:00.765701 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:38:00.765711 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:38:00.765726 | orchestrator |
2025-05-19 14:38:00.765736 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-05-19 14:38:00.765746 | orchestrator | Monday 19 May 2025 14:37:37 +0000 (0:00:09.339) 0:00:36.117 ************
2025-05-19 14:38:00.765755 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-05-19 14:38:00.765765 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-05-19 14:38:00.765775 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-05-19 14:38:00.765785 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-05-19 14:38:00.765794 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-05-19 14:38:00.765809 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-05-19 14:38:00.765820 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-05-19 14:38:00.765829 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-05-19 14:38:00.765839 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-05-19 14:38:00.765849 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-05-19 14:38:00.765862 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-05-19 14:38:00.765872 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-05-19 14:38:00.765881 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-19 14:38:00.765891 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-19 14:38:00.765900 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-19 14:38:00.765910 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-19 14:38:00.765919 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-19 14:38:00.765929 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-19 14:38:00.765938 | orchestrator |
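The items in the task above map naturally onto ovs-vsctl operations against the Open_vSwitch table. A sketch of the equivalent manual commands, with testbed-node-0 standing in for each host (an assumption about what the module does under the hood, not the role's literal implementation):

  ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-0
  ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-0
  # 'state': 'absent' for hw-offload means the key is removed if present:
  ovs-vsctl remove Open_vSwitch . other_config hw-offload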
2025-05-19 14:38:00.765948 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-05-19 14:38:00.765958 | orchestrator | Monday 19 May 2025 14:37:44 +0000 (0:00:07.248) 0:00:43.365 ************
2025-05-19 14:38:00.765967 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-05-19 14:38:00.765977 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:00.765987 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-05-19 14:38:00.765996 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:00.766006 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-05-19 14:38:00.766071 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:00.766085 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-05-19 14:38:00.766095 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-05-19 14:38:00.766105 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-05-19 14:38:00.766115 | orchestrator |
2025-05-19 14:38:00.766125 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-05-19 14:38:00.766134 | orchestrator | Monday 19 May 2025 14:37:47 +0000 (0:00:02.376) 0:00:45.742 ************
2025-05-19 14:38:00.766151 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-05-19 14:38:00.766161 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:00.766171 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-05-19 14:38:00.766180 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:00.766190 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-05-19 14:38:00.766200 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:00.766209 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-05-19 14:38:00.766219 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-05-19 14:38:00.766229 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-05-19 14:38:00.766238 | orchestrator |
2025-05-19 14:38:00.766248 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-05-19 14:38:00.766258 | orchestrator | Monday 19 May 2025 14:37:50 +0000 (0:00:03.478) 0:00:49.221 ************
2025-05-19 14:38:00.766267 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:38:00.766277 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:38:00.766286 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:38:00.766319 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:38:00.766329 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:38:00.766339 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:38:00.766348 | orchestrator |
2025-05-19 14:38:00.766358 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:38:00.766368 | orchestrator | testbed-node-0 : ok=15 changed=11 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-05-19 14:38:00.766378 | orchestrator | testbed-node-1 : ok=15 changed=11 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-05-19 14:38:00.766388 | orchestrator | testbed-node-2 : ok=15 changed=11 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-05-19 14:38:00.766397 | orchestrator | testbed-node-3 : ok=13 changed=9 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-05-19 14:38:00.766407 | orchestrator | testbed-node-4 : ok=13 changed=9 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-05-19 14:38:00.766423 | orchestrator | testbed-node-5 : ok=13 changed=9 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-05-19 14:38:00.766433 | orchestrator |
2025-05-19 14:38:00.766443 | orchestrator |
2025-05-19 14:38:00.766453 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:38:00.766463 | orchestrator | Monday 19 May 2025 14:37:58 +0000 (0:00:08.220) 0:00:57.442 ************
2025-05-19 14:38:00.766473 | orchestrator | ===============================================================================
2025-05-19 14:38:00.766482 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.56s
2025-05-19 14:38:00.766492 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.25s
2025-05-19 14:38:00.766506 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 6.83s
2025-05-19 14:38:00.766515 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.48s
2025-05-19 14:38:00.766525 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.73s
2025-05-19 14:38:00.766534 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.38s
2025-05-19 14:38:00.766544 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.19s
2025-05-19 14:38:00.766554 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.15s
2025-05-19 14:38:00.766563 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.99s
2025-05-19 14:38:00.766578 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.87s
2025-05-19 14:38:00.766588 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.39s
2025-05-19 14:38:00.766597 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.30s
2025-05-19 14:38:00.766607 | orchestrator | module-load : Load modules ---------------------------------------------- 1.29s
2025-05-19 14:38:00.766616 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.29s
2025-05-19 14:38:00.766626 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.22s
2025-05-19 14:38:00.766635 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.85s
2025-05-19 14:38:00.766645 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.74s
2025-05-19 14:38:00.766654 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.57s
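The bridge and port tasks earlier in this play are idempotent ovs-vsctl operations on the three network nodes. A minimal manual equivalent (a sketch; the role's actual module invocation may differ):

  ovs-vsctl --may-exist add-br br-ex
  ovs-vsctl --may-exist add-port br-ex vxlan0
  ovs-vsctl list-br   # verify the bridge exists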
2025-05-19 14:38:00.766664 | orchestrator | 2025-05-19 14:38:00 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:38:00.766674 | orchestrator | 2025-05-19 14:38:00 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:38:03.798575 | orchestrator | 2025-05-19 14:38:03 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED
2025-05-19 14:38:03.799959 | orchestrator | 2025-05-19 14:38:03 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:38:03.800458 | orchestrator | 2025-05-19 14:38:03 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state STARTED
2025-05-19 14:38:03.801043 | orchestrator | 2025-05-19 14:38:03 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:38:03.801720 | orchestrator | 2025-05-19 14:38:03 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:38:03.801755 | orchestrator | 2025-05-19 14:38:03 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:38:49.482529 | orchestrator | 2025-05-19 14:38:49 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED
2025-05-19 14:38:49.483594 | orchestrator | 2025-05-19 14:38:49 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:38:49.485135 | orchestrator |
2025-05-19 14:38:49.485161 | orchestrator | 2025-05-19 14:38:49 | INFO  | Task d084fe32-30ea-48b5-915d-4db37c194d62 is in state SUCCESS
2025-05-19 14:38:49.486071 | orchestrator | 2025-05-19 14:38:49 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:38:49.487280 | orchestrator |
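The deploy wrapper simply polls the task states once a second until each task leaves STARTED. A minimal sketch of that loop (osism_task_state is a hypothetical helper standing in for whatever API call the wrapper actually uses to read a task's state; it is not a real osism CLI subcommand):

  for task in f574db88-944c-4750-8db9-e34691b439de da24b84a-bb0c-4b01-87b3-542158d3c936 \
              d084fe32-30ea-48b5-915d-4db37c194d62 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 \
              05a6690b-c40a-4e02-b5ac-a4b4c590a84c; do
    while [ "$(osism_task_state "$task")" = "STARTED" ]; do
      echo "Task $task is in state STARTED"
      sleep 1   # "Wait 1 second(s) until the next check"
    done
  done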
2025-05-19 14:38:49.487312 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-05-19 14:38:49.487326 | orchestrator |
2025-05-19 14:38:49.487339 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-05-19 14:38:49.487350 | orchestrator | Monday 19 May 2025 14:34:18 +0000 (0:00:00.209) 0:00:00.209 ************
2025-05-19 14:38:49.487361 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:38:49.487373 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:38:49.487384 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:38:49.487394 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:38:49.487404 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:38:49.487415 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:38:49.487425 | orchestrator |
2025-05-19 14:38:49.487436 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-05-19 14:38:49.487446 | orchestrator | Monday 19 May 2025 14:34:19 +0000 (0:00:00.865) 0:00:01.075 ************
2025-05-19 14:38:49.487457 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:49.487468 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:49.487479 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:49.487489 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.487499 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.487511 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.487521 | orchestrator |
2025-05-19 14:38:49.487555 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-05-19 14:38:49.487566 | orchestrator | Monday 19 May 2025 14:34:19 +0000 (0:00:00.624) 0:00:01.699 ************
2025-05-19 14:38:49.487577 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:49.487588 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:49.487598 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:49.487609 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.487619 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.487630 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.487640 | orchestrator |
2025-05-19 14:38:49.487651 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-05-19 14:38:49.487662 | orchestrator | Monday 19 May 2025 14:34:20 +0000 (0:00:00.652) 0:00:02.352 ************
2025-05-19 14:38:49.487672 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:38:49.487683 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:38:49.487694 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:38:49.487704 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:38:49.487714 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:38:49.487725 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:38:49.487735 | orchestrator |
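This task and the IPv6 counterparts that follow are plain sysctl toggles. A sketch of the equivalent manual settings (the exact keys and values the role writes are assumptions; accept_ra=2 is the conventional value for accepting router advertisements while forwarding is enabled):

  sysctl -w net.ipv4.ip_forward=1
  sysctl -w net.ipv6.conf.all.forwarding=1
  sysctl -w net.ipv6.conf.all.accept_ra=2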
2025-05-19 14:38:49.487746 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-05-19 14:38:49.487756 | orchestrator | Monday 19 May 2025 14:34:22 +0000 (0:00:01.767) 0:00:04.119 ************
2025-05-19 14:38:49.487767 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:38:49.487778 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:38:49.487788 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:38:49.487798 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:38:49.487809 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:38:49.487819 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:38:49.487830 | orchestrator |
2025-05-19 14:38:49.487840 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-05-19 14:38:49.487851 | orchestrator | Monday 19 May 2025 14:34:23 +0000 (0:00:01.060) 0:00:05.180 ************
2025-05-19 14:38:49.487861 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:38:49.487872 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:38:49.487882 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:38:49.487893 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:38:49.487903 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:38:49.487914 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:38:49.487924 | orchestrator |
2025-05-19 14:38:49.487935 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-05-19 14:38:49.487945 | orchestrator | Monday 19 May 2025 14:34:24 +0000 (0:00:00.908) 0:00:06.089 ************
2025-05-19 14:38:49.487956 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:49.487966 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:49.487977 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:49.487987 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.487998 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.488008 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.488018 | orchestrator |
2025-05-19 14:38:49.488029 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-05-19 14:38:49.488040 | orchestrator | Monday 19 May 2025 14:34:25 +0000 (0:00:00.942) 0:00:07.032 ************
2025-05-19 14:38:49.488050 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:49.488061 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:49.488071 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:49.488082 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.488093 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.488103 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.488114 | orchestrator |
2025-05-19 14:38:49.488124 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-05-19 14:38:49.488135 | orchestrator | Monday 19 May 2025 14:34:25 +0000 (0:00:00.647) 0:00:07.680 ************
2025-05-19 14:38:49.488153 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 14:38:49.488164 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 14:38:49.488174 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:49.488185 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 14:38:49.488196 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 14:38:49.488206 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:49.488217 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 14:38:49.488228 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 14:38:49.488264 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:49.488284 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 14:38:49.488307 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 14:38:49.488319 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.488330 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 14:38:49.488341 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 14:38:49.488352 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.488363 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 14:38:49.488373 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 14:38:49.488384 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.488395 | orchestrator |
2025-05-19 14:38:49.488406 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-05-19 14:38:49.488416 | orchestrator | Monday 19 May 2025 14:34:26 +0000 (0:00:00.920) 0:00:08.601 ************
2025-05-19 14:38:49.488427 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:49.488437 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:49.488448 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:49.488459 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.488469 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.488480 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.488491 | orchestrator |
2025-05-19 14:38:49.488502 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-05-19 14:38:49.488513 | orchestrator | Monday 19 May 2025 14:34:28 +0000 (0:00:01.512) 0:00:10.113 ************
2025-05-19 14:38:49.488524 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:38:49.488535 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:38:49.488546 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:38:49.488556 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:38:49.488567 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:38:49.488578 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:38:49.488588 | orchestrator |
2025-05-19 14:38:49.488599 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-05-19 14:38:49.488610 | orchestrator | Monday 19 May 2025 14:34:28 +0000 (0:00:00.576) 0:00:10.690 ************
2025-05-19 14:38:49.488620 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:38:49.488631 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:38:49.488642 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:38:49.488652 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:38:49.488663 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:38:49.488674 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:38:49.488684 | orchestrator |
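Downloading the k3s binary amounts to fetching a single static binary from the k3s release page. A sketch of the equivalent manual step (the pinned version below is a hypothetical placeholder; the role templates the real version from its defaults):

  K3S_VERSION="v1.30.0+k3s1"   # hypothetical pin, not the version used by this job
  curl -fsSL -o /usr/local/bin/k3s \
    "https://github.com/k3s-io/k3s/releases/download/${K3S_VERSION}/k3s"
  chmod 755 /usr/local/bin/k3s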
2025-05-19 14:38:49.488695 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-05-19 14:38:49.488705 | orchestrator | Monday 19 May 2025 14:34:35 +0000 (0:00:06.699) 0:00:17.389 ************
2025-05-19 14:38:49.488716 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:49.488727 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:49.488745 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:49.488756 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.488767 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.488777 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.488788 | orchestrator |
2025-05-19 14:38:49.488799 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-05-19 14:38:49.488810 | orchestrator | Monday 19 May 2025 14:34:36 +0000 (0:00:01.183) 0:00:18.572 ************
2025-05-19 14:38:49.488820 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:49.488831 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:49.488842 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:49.488852 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.488863 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.488873 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.488884 | orchestrator |
2025-05-19 14:38:49.488895 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-05-19 14:38:49.488907 | orchestrator | Monday 19 May 2025 14:34:38 +0000 (0:00:01.693) 0:00:20.266 ************
2025-05-19 14:38:49.488918 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:49.488928 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:49.488939 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:49.488949 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.488960 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.488971 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.488981 | orchestrator |
2025-05-19 14:38:49.488992 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-05-19 14:38:49.489003 | orchestrator | Monday 19 May 2025 14:34:39 +0000 (0:00:00.601) 0:00:20.867 ************
2025-05-19 14:38:49.489014 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2025-05-19 14:38:49.489025 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2025-05-19 14:38:49.489036 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:49.489046 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2025-05-19 14:38:49.489057 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2025-05-19 14:38:49.489068 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:49.489078 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2025-05-19 14:38:49.489089 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2025-05-19 14:38:49.489099 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:49.489110 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2025-05-19 14:38:49.489120 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2025-05-19 14:38:49.489131 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.489142 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2025-05-19 14:38:49.489152 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2025-05-19 14:38:49.489163 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.489173 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2025-05-19 14:38:49.489184 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2025-05-19 14:38:49.489195 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.489205 | orchestrator |
2025-05-19 14:38:49.489220 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-05-19 14:38:49.489261 | orchestrator | Monday 19 May 2025 14:34:40 +0000 (0:00:01.050) 0:00:21.917 ************
2025-05-19 14:38:49.489275 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:38:49.489286 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:38:49.489296 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:38:49.489307 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.489317 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.489328 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.489338 | orchestrator |
2025-05-19 14:38:49.489349 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-05-19 14:38:49.489367 | orchestrator |
2025-05-19 14:38:49.489378 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-05-19 14:38:49.489389 | orchestrator | Monday 19 May 2025 14:34:41 +0000 (0:00:01.714) 0:00:23.632 ************
2025-05-19 14:38:49.489400 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:38:49.489410 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:38:49.489421 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:38:49.489432 | orchestrator |
2025-05-19 14:38:49.489443 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-05-19 14:38:49.489454 | orchestrator | Monday 19 May 2025 14:34:43 +0000 (0:00:01.320) 0:00:24.952 ************
2025-05-19 14:38:49.489464 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:38:49.489475 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:38:49.489486 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:38:49.489497 | orchestrator |
2025-05-19 14:38:49.489507 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-05-19 14:38:49.489518 | orchestrator | Monday 19 May 2025 14:34:44 +0000 (0:00:01.259) 0:00:26.212 ************
2025-05-19 14:38:49.489529 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:38:49.489539 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:38:49.489550 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:38:49.489561 | orchestrator |
2025-05-19 14:38:49.489571 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-05-19 14:38:49.489582 | orchestrator | Monday 19 May 2025 14:34:45 +0000 (0:00:01.125) 0:00:27.337 ************
2025-05-19 14:38:49.489593 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:38:49.489604 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:38:49.489614 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:38:49.489625 | orchestrator |
2025-05-19 14:38:49.489635 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-05-19 14:38:49.489646 | orchestrator | Monday 19 May 2025 14:34:46 +0000 (0:00:00.744) 0:00:28.082 ************
2025-05-19 14:38:49.489657 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.489668 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.489678 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.489689 | orchestrator |
2025-05-19 14:38:49.489700 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-05-19 14:38:49.489710 | orchestrator | Monday 19 May 2025 14:34:46 +0000 (0:00:00.385) 0:00:28.468 ************
2025-05-19 14:38:49.489721 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:38:49.489732 | orchestrator |
2025-05-19 14:38:49.489743 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-05-19 14:38:49.489754 | orchestrator | Monday 19 May 2025 14:34:47 +0000 (0:00:00.552) 0:00:29.020 ************
2025-05-19 14:38:49.489764 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:38:49.489775 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:38:49.489785 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:38:49.489796 | orchestrator |
2025-05-19 14:38:49.489807 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-05-19 14:38:49.489817 | orchestrator | Monday 19 May 2025 14:34:49 +0000 (0:00:02.030) 0:00:31.051 ************
2025-05-19 14:38:49.489828 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.489839 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.489849 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:38:49.489860 | orchestrator |
2025-05-19 14:38:49.489871 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-05-19 14:38:49.489881 | orchestrator | Monday 19 May 2025 14:34:50 +0000 (0:00:00.992) 0:00:32.044 ************
2025-05-19 14:38:49.489892 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.489903 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.489913 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:38:49.489924 | orchestrator |
2025-05-19 14:38:49.489935 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-05-19 14:38:49.489952 | orchestrator | Monday 19 May 2025 14:34:51 +0000 (0:00:00.797) 0:00:32.841 ************
2025-05-19 14:38:49.489963 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.489974 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.489984 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:38:49.489995 | orchestrator |
2025-05-19 14:38:49.490006 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-05-19 14:38:49.490055 | orchestrator | Monday 19 May 2025 14:34:52 +0000 (0:00:01.835) 0:00:34.677 ************
2025-05-19 14:38:49.490069 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.490080 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.490090 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.490101 | orchestrator |
2025-05-19 14:38:49.490111 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-05-19 14:38:49.490122 | orchestrator | Monday 19 May 2025 14:34:53 +0000 (0:00:00.408) 0:00:35.085 ************
2025-05-19 14:38:49.490133 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:38:49.490143 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:38:49.490154 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:38:49.490164 | orchestrator |
2025-05-19 14:38:49.490175 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-05-19 14:38:49.490186 | orchestrator | Monday 19 May 2025 14:34:53 +0000 (0:00:00.351) 0:00:35.436 ************
2025-05-19 14:38:49.490196 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:38:49.490207 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:38:49.490217 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:38:49.490228 | orchestrator |
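"Transient k3s-init service" refers to the common k3s-ansible pattern of bootstrapping the HA cluster under a throwaway systemd unit before the permanent service file is installed. A sketch of that pattern (the flags and token handling are assumptions, not the role's exact command line):

  # on the first master:
  systemd-run --unit=k3s-init -p Restart=on-failure \
    k3s server --cluster-init --token "$K3S_TOKEN"
  # on the remaining masters, join the first server instead:
  #   systemd-run --unit=k3s-init -p Restart=on-failure \
  #     k3s server --server https://<first-master>:6443 --token "$K3S_TOKEN"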
[testbed-node-2] 2025-05-19 14:38:49.490228 | orchestrator | 2025-05-19 14:38:49.490275 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-05-19 14:38:49.490305 | orchestrator | Monday 19 May 2025 14:34:55 +0000 (0:00:02.022) 0:00:37.458 ************ 2025-05-19 14:38:49.490328 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-05-19 14:38:49.490340 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-05-19 14:38:49.490351 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-05-19 14:38:49.490362 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-19 14:38:49.490373 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-19 14:38:49.490384 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-19 14:38:49.490394 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-19 14:38:49.490405 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-19 14:38:49.490416 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-19 14:38:49.490426 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-05-19 14:38:49.490437 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-05-19 14:38:49.490447 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-05-19 14:38:49.490458 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-05-19 14:38:49.490477 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-05-19 14:38:49.490487 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
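The FAILED - RETRYING messages above are expected on a fresh cluster: all three servers were just started under the transient k3s-init service, and this task polls the first master until every server appears before the play continues. A minimal sketch of such a join check, assuming the upstream k3s-ansible pattern (the exact task body is not printed in this log; the retry count is taken from the "20 retries" counter, the delay and group name are assumptions):

- name: Verify that all nodes actually joined (check k3s-init.service if this fails)
  ansible.builtin.command:
    cmd: >-
      k3s kubectl get nodes
      -l "node-role.kubernetes.io/master=true"
      -o=jsonpath="{.items[*].metadata.name}"
  register: nodes
  # Succeed only once every host in the server group is listed;
  # 'master' as the group name is an assumption.
  until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups['master'] | length)
  retries: 20
  delay: 10      # assumption; not visible in the log
  changed_when: false

The four retries consumed here simply reflect the time the embedded cluster datastore needed to form quorum and admit all three servers; the task then reports ok on every node, as the next lines show.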
2025-05-19 14:38:49.490498 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:38:49.490509 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:38:49.490519 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:38:49.490530 | orchestrator | 2025-05-19 14:38:49.490541 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-05-19 14:38:49.490551 | orchestrator | Monday 19 May 2025 14:35:51 +0000 (0:00:56.109) 0:01:33.568 ************ 2025-05-19 14:38:49.490562 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.490572 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:38:49.490583 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:38:49.490593 | orchestrator | 2025-05-19 14:38:49.490604 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-05-19 14:38:49.490615 | orchestrator | Monday 19 May 2025 14:35:52 +0000 (0:00:00.349) 0:01:33.918 ************ 2025-05-19 14:38:49.490625 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:38:49.490636 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:38:49.490646 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:38:49.490657 | orchestrator | 2025-05-19 14:38:49.490667 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-05-19 14:38:49.490678 | orchestrator | Monday 19 May 2025 14:35:53 +0000 (0:00:00.948) 0:01:34.866 ************ 2025-05-19 14:38:49.490688 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:38:49.490699 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:38:49.490709 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:38:49.490720 | orchestrator | 2025-05-19 14:38:49.490730 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-05-19 14:38:49.490741 | orchestrator | Monday 19 May 2025 14:35:54 +0000 (0:00:01.173) 0:01:36.040 ************ 2025-05-19 14:38:49.490751 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:38:49.490762 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:38:49.490772 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:38:49.490783 | orchestrator | 2025-05-19 14:38:49.490793 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-05-19 14:38:49.490804 | orchestrator | Monday 19 May 2025 14:36:09 +0000 (0:00:14.769) 0:01:50.810 ************ 2025-05-19 14:38:49.490814 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:38:49.490825 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:38:49.490835 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:38:49.490846 | orchestrator | 2025-05-19 14:38:49.490856 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-05-19 14:38:49.490867 | orchestrator | Monday 19 May 2025 14:36:09 +0000 (0:00:00.785) 0:01:51.595 ************ 2025-05-19 14:38:49.490878 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:38:49.490888 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:38:49.490899 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:38:49.490909 | orchestrator | 2025-05-19 14:38:49.490920 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-05-19 14:38:49.490930 | orchestrator | Monday 19 May 2025 14:36:10 +0000 (0:00:00.608) 0:01:52.204 ************ 2025-05-19 14:38:49.490945 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:38:49.490956 | orchestrator | changed: 
[testbed-node-1] 2025-05-19 14:38:49.490967 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:38:49.490977 | orchestrator | 2025-05-19 14:38:49.490994 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-05-19 14:38:49.491006 | orchestrator | Monday 19 May 2025 14:36:11 +0000 (0:00:00.633) 0:01:52.837 ************ 2025-05-19 14:38:49.491016 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:38:49.491027 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:38:49.491043 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:38:49.491054 | orchestrator | 2025-05-19 14:38:49.491065 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-05-19 14:38:49.491075 | orchestrator | Monday 19 May 2025 14:36:12 +0000 (0:00:01.037) 0:01:53.875 ************ 2025-05-19 14:38:49.491086 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:38:49.491096 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:38:49.491107 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:38:49.491117 | orchestrator | 2025-05-19 14:38:49.491128 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-05-19 14:38:49.491139 | orchestrator | Monday 19 May 2025 14:36:12 +0000 (0:00:00.454) 0:01:54.330 ************ 2025-05-19 14:38:49.491149 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:38:49.491160 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:38:49.491170 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:38:49.491181 | orchestrator | 2025-05-19 14:38:49.491192 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-05-19 14:38:49.491202 | orchestrator | Monday 19 May 2025 14:36:13 +0000 (0:00:00.724) 0:01:55.054 ************ 2025-05-19 14:38:49.491213 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:38:49.491223 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:38:49.491234 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:38:49.491269 | orchestrator | 2025-05-19 14:38:49.491280 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-05-19 14:38:49.491291 | orchestrator | Monday 19 May 2025 14:36:13 +0000 (0:00:00.648) 0:01:55.703 ************ 2025-05-19 14:38:49.491302 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:38:49.491312 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:38:49.491323 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:38:49.491334 | orchestrator | 2025-05-19 14:38:49.491344 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-05-19 14:38:49.491355 | orchestrator | Monday 19 May 2025 14:36:15 +0000 (0:00:01.263) 0:01:56.967 ************ 2025-05-19 14:38:49.491366 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:38:49.491376 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:38:49.491387 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:38:49.491398 | orchestrator | 2025-05-19 14:38:49.491408 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-05-19 14:38:49.491419 | orchestrator | Monday 19 May 2025 14:36:16 +0000 (0:00:00.799) 0:01:57.766 ************ 2025-05-19 14:38:49.491430 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.491441 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:38:49.491451 | orchestrator | skipping: [testbed-node-2] 2025-05-19 
14:38:49.491462 | orchestrator | 2025-05-19 14:38:49.491473 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-05-19 14:38:49.491483 | orchestrator | Monday 19 May 2025 14:36:16 +0000 (0:00:00.269) 0:01:58.036 ************ 2025-05-19 14:38:49.491494 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.491505 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:38:49.491515 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:38:49.491526 | orchestrator | 2025-05-19 14:38:49.491536 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-05-19 14:38:49.491547 | orchestrator | Monday 19 May 2025 14:36:16 +0000 (0:00:00.299) 0:01:58.335 ************ 2025-05-19 14:38:49.491558 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:38:49.491569 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:38:49.491579 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:38:49.491602 | orchestrator | 2025-05-19 14:38:49.491613 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-05-19 14:38:49.491634 | orchestrator | Monday 19 May 2025 14:36:17 +0000 (0:00:00.857) 0:01:59.192 ************ 2025-05-19 14:38:49.491645 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:38:49.491655 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:38:49.491666 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:38:49.491677 | orchestrator | 2025-05-19 14:38:49.491694 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-05-19 14:38:49.491705 | orchestrator | Monday 19 May 2025 14:36:18 +0000 (0:00:00.648) 0:01:59.841 ************ 2025-05-19 14:38:49.491716 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-19 14:38:49.491726 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-19 14:38:49.491737 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-19 14:38:49.491748 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-19 14:38:49.491759 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-19 14:38:49.491770 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-19 14:38:49.491781 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-19 14:38:49.491792 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-19 14:38:49.491802 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-19 14:38:49.491813 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-05-19 14:38:49.491824 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-19 14:38:49.491839 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-19 14:38:49.491856 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-05-19 14:38:49.491868 | orchestrator 
| changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-19 14:38:49.491879 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-19 14:38:49.491890 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-19 14:38:49.491901 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-19 14:38:49.491912 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-19 14:38:49.491923 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-19 14:38:49.491933 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-19 14:38:49.491944 | orchestrator | 2025-05-19 14:38:49.491955 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-05-19 14:38:49.491966 | orchestrator | 2025-05-19 14:38:49.491977 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-05-19 14:38:49.491988 | orchestrator | Monday 19 May 2025 14:36:21 +0000 (0:00:02.995) 0:02:02.837 ************ 2025-05-19 14:38:49.491999 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:38:49.492009 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:38:49.492020 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:38:49.492031 | orchestrator | 2025-05-19 14:38:49.492042 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-05-19 14:38:49.492053 | orchestrator | Monday 19 May 2025 14:36:21 +0000 (0:00:00.496) 0:02:03.334 ************ 2025-05-19 14:38:49.492063 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:38:49.492074 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:38:49.492085 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:38:49.492096 | orchestrator | 2025-05-19 14:38:49.492106 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-05-19 14:38:49.492117 | orchestrator | Monday 19 May 2025 14:36:22 +0000 (0:00:00.591) 0:02:03.925 ************ 2025-05-19 14:38:49.492139 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:38:49.492150 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:38:49.492160 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:38:49.492171 | orchestrator | 2025-05-19 14:38:49.492182 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-05-19 14:38:49.492193 | orchestrator | Monday 19 May 2025 14:36:22 +0000 (0:00:00.303) 0:02:04.228 ************ 2025-05-19 14:38:49.492204 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:38:49.492215 | orchestrator | 2025-05-19 14:38:49.492226 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-05-19 14:38:49.492254 | orchestrator | Monday 19 May 2025 14:36:23 +0000 (0:00:00.651) 0:02:04.880 ************ 2025-05-19 14:38:49.492266 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:38:49.492277 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:38:49.492288 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:38:49.492299 | orchestrator | 2025-05-19 14:38:49.492309 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-05-19 14:38:49.492320 | orchestrator | Monday 19 May 2025 14:36:23 +0000 (0:00:00.267) 0:02:05.147 ************ 2025-05-19 14:38:49.492330 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:38:49.492341 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:38:49.492351 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:38:49.492362 | orchestrator | 2025-05-19 14:38:49.492372 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-05-19 14:38:49.492383 | orchestrator | Monday 19 May 2025 14:36:23 +0000 (0:00:00.284) 0:02:05.432 ************ 2025-05-19 14:38:49.492393 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:38:49.492404 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:38:49.492415 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:38:49.492425 | orchestrator | 2025-05-19 14:38:49.492435 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-05-19 14:38:49.492446 | orchestrator | Monday 19 May 2025 14:36:23 +0000 (0:00:00.294) 0:02:05.726 ************ 2025-05-19 14:38:49.492456 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:38:49.492467 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:38:49.492477 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:38:49.492488 | orchestrator | 2025-05-19 14:38:49.492498 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-05-19 14:38:49.492509 | orchestrator | Monday 19 May 2025 14:36:25 +0000 (0:00:01.416) 0:02:07.143 ************ 2025-05-19 14:38:49.492519 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:38:49.492530 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:38:49.492540 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:38:49.492550 | orchestrator | 2025-05-19 14:38:49.492561 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-05-19 14:38:49.492572 | orchestrator | 2025-05-19 14:38:49.492582 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-05-19 14:38:49.492593 | orchestrator | Monday 19 May 2025 14:36:34 +0000 (0:00:09.468) 0:02:16.612 ************ 2025-05-19 14:38:49.492603 | orchestrator | ok: [testbed-manager] 2025-05-19 14:38:49.492614 | orchestrator | 2025-05-19 14:38:49.492624 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-05-19 14:38:49.492635 | orchestrator | Monday 19 May 2025 14:36:35 +0000 (0:00:00.690) 0:02:17.302 ************ 2025-05-19 14:38:49.492645 | orchestrator | changed: [testbed-manager] 2025-05-19 14:38:49.492656 | orchestrator | 2025-05-19 14:38:49.492666 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-19 14:38:49.492677 | orchestrator | Monday 19 May 2025 14:36:35 +0000 (0:00:00.381) 0:02:17.684 ************ 2025-05-19 14:38:49.492692 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-19 14:38:49.492703 | orchestrator | 2025-05-19 14:38:49.492720 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-19 14:38:49.492739 | orchestrator | Monday 19 May 2025 14:36:36 +0000 (0:00:00.884) 0:02:18.568 ************ 2025-05-19 14:38:49.492749 | orchestrator | changed: [testbed-manager] 2025-05-19 14:38:49.492760 | orchestrator | 2025-05-19 
14:38:49.492771 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-05-19 14:38:49.492781 | orchestrator | Monday 19 May 2025 14:36:37 +0000 (0:00:00.815) 0:02:19.384 ************ 2025-05-19 14:38:49.492792 | orchestrator | changed: [testbed-manager] 2025-05-19 14:38:49.492803 | orchestrator | 2025-05-19 14:38:49.492813 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-05-19 14:38:49.492824 | orchestrator | Monday 19 May 2025 14:36:38 +0000 (0:00:00.546) 0:02:19.931 ************ 2025-05-19 14:38:49.492834 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-19 14:38:49.492845 | orchestrator | 2025-05-19 14:38:49.492856 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-05-19 14:38:49.492866 | orchestrator | Monday 19 May 2025 14:36:39 +0000 (0:00:01.462) 0:02:21.393 ************ 2025-05-19 14:38:49.492877 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-19 14:38:49.492888 | orchestrator | 2025-05-19 14:38:49.492898 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-05-19 14:38:49.492909 | orchestrator | Monday 19 May 2025 14:36:40 +0000 (0:00:00.771) 0:02:22.165 ************ 2025-05-19 14:38:49.492919 | orchestrator | changed: [testbed-manager] 2025-05-19 14:38:49.492930 | orchestrator | 2025-05-19 14:38:49.492941 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-05-19 14:38:49.492952 | orchestrator | Monday 19 May 2025 14:36:40 +0000 (0:00:00.406) 0:02:22.571 ************ 2025-05-19 14:38:49.492962 | orchestrator | changed: [testbed-manager] 2025-05-19 14:38:49.492973 | orchestrator | 2025-05-19 14:38:49.492984 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-05-19 14:38:49.492995 | orchestrator | 2025-05-19 14:38:49.493005 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-05-19 14:38:49.493016 | orchestrator | Monday 19 May 2025 14:36:41 +0000 (0:00:00.431) 0:02:23.002 ************ 2025-05-19 14:38:49.493027 | orchestrator | ok: [testbed-manager] 2025-05-19 14:38:49.493037 | orchestrator | 2025-05-19 14:38:49.493048 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-05-19 14:38:49.493059 | orchestrator | Monday 19 May 2025 14:36:41 +0000 (0:00:00.129) 0:02:23.131 ************ 2025-05-19 14:38:49.493069 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-05-19 14:38:49.493080 | orchestrator | 2025-05-19 14:38:49.493091 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-05-19 14:38:49.493101 | orchestrator | Monday 19 May 2025 14:36:41 +0000 (0:00:00.197) 0:02:23.329 ************ 2025-05-19 14:38:49.493112 | orchestrator | ok: [testbed-manager] 2025-05-19 14:38:49.493123 | orchestrator | 2025-05-19 14:38:49.493134 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-05-19 14:38:49.493145 | orchestrator | Monday 19 May 2025 14:36:42 +0000 (0:00:01.232) 0:02:24.562 ************ 2025-05-19 14:38:49.493155 | orchestrator | ok: [testbed-manager] 2025-05-19 14:38:49.493166 | orchestrator | 2025-05-19 14:38:49.493177 | orchestrator | TASK [kubectl : Add repository gpg key] 
**************************************** 2025-05-19 14:38:49.493187 | orchestrator | Monday 19 May 2025 14:36:44 +0000 (0:00:01.479) 0:02:26.041 ************ 2025-05-19 14:38:49.493198 | orchestrator | changed: [testbed-manager] 2025-05-19 14:38:49.493209 | orchestrator | 2025-05-19 14:38:49.493219 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-05-19 14:38:49.493230 | orchestrator | Monday 19 May 2025 14:36:45 +0000 (0:00:00.797) 0:02:26.839 ************ 2025-05-19 14:38:49.493259 | orchestrator | ok: [testbed-manager] 2025-05-19 14:38:49.493271 | orchestrator | 2025-05-19 14:38:49.493281 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-05-19 14:38:49.493292 | orchestrator | Monday 19 May 2025 14:36:45 +0000 (0:00:00.515) 0:02:27.355 ************ 2025-05-19 14:38:49.493309 | orchestrator | changed: [testbed-manager] 2025-05-19 14:38:49.493320 | orchestrator | 2025-05-19 14:38:49.493331 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-05-19 14:38:49.493341 | orchestrator | Monday 19 May 2025 14:36:52 +0000 (0:00:06.451) 0:02:33.806 ************ 2025-05-19 14:38:49.493352 | orchestrator | changed: [testbed-manager] 2025-05-19 14:38:49.493363 | orchestrator | 2025-05-19 14:38:49.493373 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-05-19 14:38:49.493384 | orchestrator | Monday 19 May 2025 14:37:03 +0000 (0:00:11.455) 0:02:45.261 ************ 2025-05-19 14:38:49.493394 | orchestrator | ok: [testbed-manager] 2025-05-19 14:38:49.493405 | orchestrator | 2025-05-19 14:38:49.493416 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-05-19 14:38:49.493426 | orchestrator | 2025-05-19 14:38:49.493437 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-05-19 14:38:49.493448 | orchestrator | Monday 19 May 2025 14:37:03 +0000 (0:00:00.488) 0:02:45.750 ************ 2025-05-19 14:38:49.493459 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:38:49.493470 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:38:49.493480 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:38:49.493491 | orchestrator | 2025-05-19 14:38:49.493501 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-05-19 14:38:49.493512 | orchestrator | Monday 19 May 2025 14:37:04 +0000 (0:00:00.365) 0:02:46.116 ************ 2025-05-19 14:38:49.493523 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.493533 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:38:49.493544 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:38:49.493554 | orchestrator | 2025-05-19 14:38:49.493565 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-05-19 14:38:49.493576 | orchestrator | Monday 19 May 2025 14:37:04 +0000 (0:00:00.318) 0:02:46.434 ************ 2025-05-19 14:38:49.493586 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:38:49.493597 | orchestrator | 2025-05-19 14:38:49.493612 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-05-19 14:38:49.493629 | orchestrator | Monday 19 May 2025 14:37:05 +0000 (0:00:00.429) 0:02:46.863 ************ 2025-05-19 
14:38:49.493641 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-19 14:38:49.493651 | orchestrator | 2025-05-19 14:38:49.493662 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-05-19 14:38:49.493672 | orchestrator | Monday 19 May 2025 14:37:05 +0000 (0:00:00.811) 0:02:47.675 ************ 2025-05-19 14:38:49.493683 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:38:49.493693 | orchestrator | 2025-05-19 14:38:49.493704 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-05-19 14:38:49.493715 | orchestrator | Monday 19 May 2025 14:37:06 +0000 (0:00:00.780) 0:02:48.455 ************ 2025-05-19 14:38:49.493725 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.493736 | orchestrator | 2025-05-19 14:38:49.493746 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-05-19 14:38:49.493757 | orchestrator | Monday 19 May 2025 14:37:07 +0000 (0:00:00.382) 0:02:48.838 ************ 2025-05-19 14:38:49.493767 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:38:49.493778 | orchestrator | 2025-05-19 14:38:49.493788 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-05-19 14:38:49.493799 | orchestrator | Monday 19 May 2025 14:37:07 +0000 (0:00:00.879) 0:02:49.717 ************ 2025-05-19 14:38:49.493810 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.493820 | orchestrator | 2025-05-19 14:38:49.493831 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-05-19 14:38:49.493842 | orchestrator | Monday 19 May 2025 14:37:08 +0000 (0:00:00.183) 0:02:49.901 ************ 2025-05-19 14:38:49.493852 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.493863 | orchestrator | 2025-05-19 14:38:49.493892 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-05-19 14:38:49.493903 | orchestrator | Monday 19 May 2025 14:37:08 +0000 (0:00:00.175) 0:02:50.076 ************ 2025-05-19 14:38:49.493914 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.493924 | orchestrator | 2025-05-19 14:38:49.493935 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-05-19 14:38:49.493945 | orchestrator | Monday 19 May 2025 14:37:08 +0000 (0:00:00.164) 0:02:50.241 ************ 2025-05-19 14:38:49.493956 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.493966 | orchestrator | 2025-05-19 14:38:49.493977 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-05-19 14:38:49.493987 | orchestrator | Monday 19 May 2025 14:37:08 +0000 (0:00:00.154) 0:02:50.395 ************ 2025-05-19 14:38:49.493998 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-19 14:38:49.494008 | orchestrator | 2025-05-19 14:38:49.494049 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-05-19 14:38:49.494060 | orchestrator | Monday 19 May 2025 14:37:12 +0000 (0:00:03.817) 0:02:54.214 ************ 2025-05-19 14:38:49.494071 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-05-19 14:38:49.494081 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
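This wait is the single longest task of the run (69.07s in the TASKS RECAP further down): Cilium was installed moments earlier, and the task blocks until the operator, the agent daemonset, and both Hubble components have rolled out, retrying while images are still being pulled. A minimal sketch of such a readiness gate, assuming kubectl rollout status underneath and the kube-system namespace (both assumptions; only the four resource names and the "30 retries" counter appear in the log):

- name: Wait for Cilium resources
  ansible.builtin.command:
    cmd: "kubectl rollout status {{ item }} --namespace=kube-system --timeout=30s"  # namespace is an assumption
  loop:
    - deployment/cilium-operator
    - daemonset/cilium
    - deployment/hubble-relay
    - deployment/hubble-ui
  register: rollout
  # With loop + until, the retries apply per item, which matches the
  # single per-item retry visible in the log output above.
  until: rollout.rc == 0
  retries: 30
  delay: 10      # assumption; not visible in the log
  changed_when: false
  delegate_to: localhost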
2025-05-19 14:38:49.494093 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-05-19 14:38:49.494103 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-05-19 14:38:49.494114 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-05-19 14:38:49.494125 | orchestrator | 2025-05-19 14:38:49.494136 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-05-19 14:38:49.494146 | orchestrator | Monday 19 May 2025 14:38:21 +0000 (0:01:09.074) 0:04:03.289 ************ 2025-05-19 14:38:49.494157 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:38:49.494168 | orchestrator | 2025-05-19 14:38:49.494179 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-05-19 14:38:49.494189 | orchestrator | Monday 19 May 2025 14:38:22 +0000 (0:00:01.227) 0:04:04.516 ************ 2025-05-19 14:38:49.494200 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-19 14:38:49.494211 | orchestrator | 2025-05-19 14:38:49.494221 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-05-19 14:38:49.494232 | orchestrator | Monday 19 May 2025 14:38:24 +0000 (0:00:01.677) 0:04:06.194 ************ 2025-05-19 14:38:49.494373 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-19 14:38:49.494393 | orchestrator | 2025-05-19 14:38:49.494405 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-05-19 14:38:49.494415 | orchestrator | Monday 19 May 2025 14:38:25 +0000 (0:00:01.191) 0:04:07.385 ************ 2025-05-19 14:38:49.494424 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.494434 | orchestrator | 2025-05-19 14:38:49.494443 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-05-19 14:38:49.494453 | orchestrator | Monday 19 May 2025 14:38:25 +0000 (0:00:00.211) 0:04:07.597 ************ 2025-05-19 14:38:49.494462 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-05-19 14:38:49.494472 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-05-19 14:38:49.494481 | orchestrator | 2025-05-19 14:38:49.494491 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-05-19 14:38:49.494500 | orchestrator | Monday 19 May 2025 14:38:28 +0000 (0:00:02.829) 0:04:10.427 ************ 2025-05-19 14:38:49.494510 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.494519 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:38:49.494529 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:38:49.494538 | orchestrator | 2025-05-19 14:38:49.494548 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-05-19 14:38:49.494566 | orchestrator | Monday 19 May 2025 14:38:29 +0000 (0:00:00.370) 0:04:10.798 ************ 2025-05-19 14:38:49.494576 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:38:49.494592 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:38:49.494601 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:38:49.494611 | orchestrator | 2025-05-19 14:38:49.494632 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-05-19 14:38:49.494642 | orchestrator | 2025-05-19 
14:38:49.494651 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-05-19 14:38:49.494660 | orchestrator | Monday 19 May 2025 14:38:29 +0000 (0:00:00.871) 0:04:11.669 ************ 2025-05-19 14:38:49.494670 | orchestrator | ok: [testbed-manager] 2025-05-19 14:38:49.494679 | orchestrator | 2025-05-19 14:38:49.494689 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-05-19 14:38:49.494698 | orchestrator | Monday 19 May 2025 14:38:30 +0000 (0:00:00.119) 0:04:11.789 ************ 2025-05-19 14:38:49.494708 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-05-19 14:38:49.494718 | orchestrator | 2025-05-19 14:38:49.494727 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-05-19 14:38:49.494737 | orchestrator | Monday 19 May 2025 14:38:30 +0000 (0:00:00.337) 0:04:12.127 ************ 2025-05-19 14:38:49.494746 | orchestrator | changed: [testbed-manager] 2025-05-19 14:38:49.494756 | orchestrator | 2025-05-19 14:38:49.494765 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-05-19 14:38:49.494775 | orchestrator | 2025-05-19 14:38:49.494784 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-05-19 14:38:49.494793 | orchestrator | Monday 19 May 2025 14:38:35 +0000 (0:00:05.268) 0:04:17.395 ************ 2025-05-19 14:38:49.494803 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:38:49.494812 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:38:49.494822 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:38:49.494831 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:38:49.494841 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:38:49.494850 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:38:49.494859 | orchestrator | 2025-05-19 14:38:49.494869 | orchestrator | TASK [Manage labels] *********************************************************** 2025-05-19 14:38:49.494879 | orchestrator | Monday 19 May 2025 14:38:36 +0000 (0:00:00.512) 0:04:17.907 ************ 2025-05-19 14:38:49.494888 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-05-19 14:38:49.494898 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-05-19 14:38:49.494907 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-05-19 14:38:49.494917 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-19 14:38:49.494926 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-19 14:38:49.494936 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-19 14:38:49.494945 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-19 14:38:49.494955 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-19 14:38:49.494964 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-19 14:38:49.494974 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-19 14:38:49.494983 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.osism.tech/rook-osd=true) 2025-05-19 14:38:49.494993 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-19 14:38:49.495002 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-19 14:38:49.495011 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-05-19 14:38:49.495027 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-05-19 14:38:49.495037 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-05-19 14:38:49.495046 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-05-19 14:38:49.495055 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-05-19 14:38:49.495065 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-19 14:38:49.495074 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-19 14:38:49.495084 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-19 14:38:49.495093 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-19 14:38:49.495102 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-19 14:38:49.495112 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-19 14:38:49.495121 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-19 14:38:49.495130 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-19 14:38:49.495140 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-19 14:38:49.495149 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-19 14:38:49.495159 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-19 14:38:49.495172 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-19 14:38:49.495181 | orchestrator | 2025-05-19 14:38:49.495196 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-05-19 14:38:49.495206 | orchestrator | Monday 19 May 2025 14:38:47 +0000 (0:00:11.532) 0:04:29.440 ************ 2025-05-19 14:38:49.495216 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:38:49.495225 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:38:49.495255 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:38:49.495266 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.495275 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:38:49.495286 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:38:49.495295 | orchestrator | 2025-05-19 14:38:49.495305 | orchestrator | TASK [Manage taints] *********************************************************** 2025-05-19 14:38:49.495314 | orchestrator | Monday 19 May 2025 14:38:48 +0000 (0:00:00.374) 0:04:29.814 ************ 2025-05-19 14:38:49.495324 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:38:49.495333 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:38:49.495343 
| orchestrator | skipping: [testbed-node-5] 2025-05-19 14:38:49.495352 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:38:49.495361 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:38:49.495371 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:38:49.495380 | orchestrator | 2025-05-19 14:38:49.495389 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:38:49.495400 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:38:49.495410 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-19 14:38:49.495420 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-05-19 14:38:49.495430 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-05-19 14:38:49.495446 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-19 14:38:49.495455 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-19 14:38:49.495465 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-19 14:38:49.495474 | orchestrator | 2025-05-19 14:38:49.495484 | orchestrator | 2025-05-19 14:38:49.495493 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:38:49.495503 | orchestrator | Monday 19 May 2025 14:38:48 +0000 (0:00:00.466) 0:04:30.281 ************ 2025-05-19 14:38:49.495513 | orchestrator | =============================================================================== 2025-05-19 14:38:49.495522 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 69.07s 2025-05-19 14:38:49.495532 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.11s 2025-05-19 14:38:49.495541 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.77s 2025-05-19 14:38:49.495551 | orchestrator | Manage labels ---------------------------------------------------------- 11.53s 2025-05-19 14:38:49.495560 | orchestrator | kubectl : Install required packages ------------------------------------ 11.46s 2025-05-19 14:38:49.495570 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.47s 2025-05-19 14:38:49.495579 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.70s 2025-05-19 14:38:49.495589 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.45s 2025-05-19 14:38:49.495598 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.27s 2025-05-19 14:38:49.495607 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 3.82s 2025-05-19 14:38:49.495617 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.00s 2025-05-19 14:38:49.495627 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.83s 2025-05-19 14:38:49.495636 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.03s 2025-05-19 14:38:49.495645 | orchestrator | 
k3s_server : Init cluster inside the transient k3s-init service --------- 2.02s 2025-05-19 14:38:49.495655 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.84s 2025-05-19 14:38:49.495665 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.77s 2025-05-19 14:38:49.495674 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.71s 2025-05-19 14:38:49.495683 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.69s 2025-05-19 14:38:49.495693 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.68s 2025-05-19 14:38:49.495702 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.51s 2025-05-19 14:38:49.495712 | orchestrator | 2025-05-19 14:38:49 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:38:49.495721 | orchestrator | 2025-05-19 14:38:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:38:52.525550 | orchestrator | 2025-05-19 14:38:52 | INFO  | Task f7d91f25-29ec-44cb-a1a4-83d47479f495 is in state STARTED 2025-05-19 14:38:52.526787 | orchestrator | 2025-05-19 14:38:52 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:38:52.529024 | orchestrator | 2025-05-19 14:38:52 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:38:52.530822 | orchestrator | 2025-05-19 14:38:52 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:38:52.532775 | orchestrator | 2025-05-19 14:38:52 | INFO  | Task 341b4e23-2d47-4eaf-98c5-382b72e72d02 is in state STARTED 2025-05-19 14:38:52.534382 | orchestrator | 2025-05-19 14:38:52 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:38:52.534411 | orchestrator | 2025-05-19 14:38:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:38:55.580713 | orchestrator | 2025-05-19 14:38:55 | INFO  | Task f7d91f25-29ec-44cb-a1a4-83d47479f495 is in state STARTED 2025-05-19 14:38:55.581010 | orchestrator | 2025-05-19 14:38:55 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:38:55.581606 | orchestrator | 2025-05-19 14:38:55 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:38:55.583005 | orchestrator | 2025-05-19 14:38:55 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:38:55.583045 | orchestrator | 2025-05-19 14:38:55 | INFO  | Task 341b4e23-2d47-4eaf-98c5-382b72e72d02 is in state SUCCESS 2025-05-19 14:38:55.583478 | orchestrator | 2025-05-19 14:38:55 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:38:55.583560 | orchestrator | 2025-05-19 14:38:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:38:58.619720 | orchestrator | 2025-05-19 14:38:58 | INFO  | Task f7d91f25-29ec-44cb-a1a4-83d47479f495 is in state SUCCESS 2025-05-19 14:38:58.622215 | orchestrator | 2025-05-19 14:38:58 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:38:58.624127 | orchestrator | 2025-05-19 14:38:58 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:38:58.626315 | orchestrator | 2025-05-19 14:38:58 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:38:58.627608 | orchestrator | 2025-05-19 14:38:58 | INFO  
| Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:38:58.627942 | orchestrator | 2025-05-19 14:38:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:39:01.663547 | orchestrator | 2025-05-19 14:39:01 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:39:01.664455 | orchestrator | 2025-05-19 14:39:01 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:39:01.664755 | orchestrator | 2025-05-19 14:39:01 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:39:01.667443 | orchestrator | 2025-05-19 14:39:01 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:39:01.667494 | orchestrator | 2025-05-19 14:39:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:39:04.713142 | orchestrator | 2025-05-19 14:39:04 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:39:04.714306 | orchestrator | 2025-05-19 14:39:04 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:39:04.715504 | orchestrator | 2025-05-19 14:39:04 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:39:04.717201 | orchestrator | 2025-05-19 14:39:04 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:39:04.717264 | orchestrator | 2025-05-19 14:39:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:39:07.762702 | orchestrator | 2025-05-19 14:39:07 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:39:07.762805 | orchestrator | 2025-05-19 14:39:07 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:39:07.766539 | orchestrator | 2025-05-19 14:39:07 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:39:07.766569 | orchestrator | 2025-05-19 14:39:07 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:39:07.766581 | orchestrator | 2025-05-19 14:39:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:39:10.809187 | orchestrator | 2025-05-19 14:39:10 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:39:10.811324 | orchestrator | 2025-05-19 14:39:10 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:39:10.813386 | orchestrator | 2025-05-19 14:39:10 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:39:10.815101 | orchestrator | 2025-05-19 14:39:10 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:39:10.815594 | orchestrator | 2025-05-19 14:39:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:39:13.865032 | orchestrator | 2025-05-19 14:39:13 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:39:13.867130 | orchestrator | 2025-05-19 14:39:13 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:39:13.869304 | orchestrator | 2025-05-19 14:39:13 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:39:13.871373 | orchestrator | 2025-05-19 14:39:13 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:39:13.871402 | orchestrator | 2025-05-19 14:39:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:39:16.908216 | orchestrator | 2025-05-19 14:39:16 | INFO  | Task 
f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:39:16.908383 | orchestrator | 2025-05-19 14:39:16 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:39:16.911300 | orchestrator | 2025-05-19 14:39:16 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:39:16.911348 | orchestrator | 2025-05-19 14:39:16 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:39:16.911368 | orchestrator | 2025-05-19 14:39:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:39:19.955518 | orchestrator | 2025-05-19 14:39:19 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:39:19.956755 | orchestrator | 2025-05-19 14:39:19 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:39:19.959143 | orchestrator | 2025-05-19 14:39:19 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:39:19.960543 | orchestrator | 2025-05-19 14:39:19 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:39:19.960578 | orchestrator | 2025-05-19 14:39:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:39:22.991978 | orchestrator | 2025-05-19 14:39:22 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:39:22.992498 | orchestrator | 2025-05-19 14:39:22 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:39:22.993435 | orchestrator | 2025-05-19 14:39:22 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:39:22.994471 | orchestrator | 2025-05-19 14:39:22 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:39:22.994521 | orchestrator | 2025-05-19 14:39:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:39:26.048132 | orchestrator | 2025-05-19 14:39:26 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:39:26.048241 | orchestrator | 2025-05-19 14:39:26 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:39:26.048275 | orchestrator | 2025-05-19 14:39:26 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:39:26.048330 | orchestrator | 2025-05-19 14:39:26 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:39:26.048344 | orchestrator | 2025-05-19 14:39:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:39:29.085124 | orchestrator | 2025-05-19 14:39:29 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state STARTED 2025-05-19 14:39:29.086787 | orchestrator | 2025-05-19 14:39:29 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:39:29.086838 | orchestrator | 2025-05-19 14:39:29 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED 2025-05-19 14:39:29.087158 | orchestrator | 2025-05-19 14:39:29 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:39:29.087295 | orchestrator | 2025-05-19 14:39:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:39:32.116719 | orchestrator | 2025-05-19 14:39:32.116806 | orchestrator | 2025-05-19 14:39:32.116821 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-05-19 14:39:32.116834 | orchestrator | 2025-05-19 14:39:32.116845 | orchestrator | TASK [Get kubeconfig file] 
***************************************************** 2025-05-19 14:39:32.116856 | orchestrator | Monday 19 May 2025 14:38:52 +0000 (0:00:00.114) 0:00:00.114 ************ 2025-05-19 14:39:32.116867 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-19 14:39:32.116877 | orchestrator | 2025-05-19 14:39:32.116888 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-19 14:39:32.116899 | orchestrator | Monday 19 May 2025 14:38:53 +0000 (0:00:00.671) 0:00:00.786 ************ 2025-05-19 14:39:32.116909 | orchestrator | changed: [testbed-manager] 2025-05-19 14:39:32.116921 | orchestrator | 2025-05-19 14:39:32.116939 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-05-19 14:39:32.116959 | orchestrator | Monday 19 May 2025 14:38:53 +0000 (0:00:00.879) 0:00:01.666 ************ 2025-05-19 14:39:32.116980 | orchestrator | changed: [testbed-manager] 2025-05-19 14:39:32.117042 | orchestrator | 2025-05-19 14:39:32.117054 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:39:32.117065 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:39:32.117078 | orchestrator | 2025-05-19 14:39:32.117088 | orchestrator | 2025-05-19 14:39:32.117099 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:39:32.117110 | orchestrator | Monday 19 May 2025 14:38:54 +0000 (0:00:00.310) 0:00:01.977 ************ 2025-05-19 14:39:32.117121 | orchestrator | =============================================================================== 2025-05-19 14:39:32.117131 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.88s 2025-05-19 14:39:32.117142 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.67s 2025-05-19 14:39:32.117152 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.31s 2025-05-19 14:39:32.117163 | orchestrator | 2025-05-19 14:39:32.117174 | orchestrator | 2025-05-19 14:39:32.117184 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-05-19 14:39:32.117195 | orchestrator | 2025-05-19 14:39:32.117206 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-05-19 14:39:32.117216 | orchestrator | Monday 19 May 2025 14:38:52 +0000 (0:00:00.159) 0:00:00.159 ************ 2025-05-19 14:39:32.117227 | orchestrator | ok: [testbed-manager] 2025-05-19 14:39:32.117260 | orchestrator | 2025-05-19 14:39:32.117272 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-05-19 14:39:32.117285 | orchestrator | Monday 19 May 2025 14:38:52 +0000 (0:00:00.450) 0:00:00.609 ************ 2025-05-19 14:39:32.117297 | orchestrator | ok: [testbed-manager] 2025-05-19 14:39:32.117308 | orchestrator | 2025-05-19 14:39:32.117360 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-19 14:39:32.117372 | orchestrator | Monday 19 May 2025 14:38:53 +0000 (0:00:00.457) 0:00:01.067 ************ 2025-05-19 14:39:32.117382 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-19 14:39:32.117393 | orchestrator | 2025-05-19 14:39:32.117404 | orchestrator | TASK [Write kubeconfig file] 
2025-05-19 14:39:32.117184 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-05-19 14:39:32.117195 | orchestrator |
2025-05-19 14:39:32.117206 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-05-19 14:39:32.117216 | orchestrator | Monday 19 May 2025 14:38:52 +0000 (0:00:00.159) 0:00:00.159 ************
2025-05-19 14:39:32.117227 | orchestrator | ok: [testbed-manager]
2025-05-19 14:39:32.117260 | orchestrator |
2025-05-19 14:39:32.117272 | orchestrator | TASK [Create .kube directory] **************************************************
2025-05-19 14:39:32.117285 | orchestrator | Monday 19 May 2025 14:38:52 +0000 (0:00:00.450) 0:00:00.609 ************
2025-05-19 14:39:32.117297 | orchestrator | ok: [testbed-manager]
2025-05-19 14:39:32.117308 | orchestrator |
2025-05-19 14:39:32.117360 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-05-19 14:39:32.117372 | orchestrator | Monday 19 May 2025 14:38:53 +0000 (0:00:00.457) 0:00:01.067 ************
2025-05-19 14:39:32.117382 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-05-19 14:39:32.117393 | orchestrator |
2025-05-19 14:39:32.117404 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-05-19 14:39:32.117414 | orchestrator | Monday 19 May 2025 14:38:53 +0000 (0:00:00.624) 0:00:01.692 ************
2025-05-19 14:39:32.117425 | orchestrator | changed: [testbed-manager]
2025-05-19 14:39:32.117436 | orchestrator |
2025-05-19 14:39:32.117446 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-05-19 14:39:32.117457 | orchestrator | Monday 19 May 2025 14:38:54 +0000 (0:00:00.964) 0:00:02.656 ************
2025-05-19 14:39:32.117467 | orchestrator | changed: [testbed-manager]
2025-05-19 14:39:32.117478 | orchestrator |
2025-05-19 14:39:32.117489 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-05-19 14:39:32.117499 | orchestrator | Monday 19 May 2025 14:38:55 +0000 (0:00:00.565) 0:00:03.221 ************
2025-05-19 14:39:32.117510 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-19 14:39:32.117521 | orchestrator |
2025-05-19 14:39:32.117531 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-05-19 14:39:32.117542 | orchestrator | Monday 19 May 2025 14:38:56 +0000 (0:00:01.514) 0:00:04.735 ************
2025-05-19 14:39:32.117552 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-19 14:39:32.117563 | orchestrator |
2025-05-19 14:39:32.117574 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-05-19 14:39:32.117584 | orchestrator | Monday 19 May 2025 14:38:57 +0000 (0:00:00.273) 0:00:05.414 ************
2025-05-19 14:39:32.117595 | orchestrator | ok: [testbed-manager]
2025-05-19 14:39:32.117605 | orchestrator |
2025-05-19 14:39:32.117616 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-05-19 14:39:32.117638 | orchestrator | Monday 19 May 2025 14:38:57 +0000 (0:00:00.273) 0:00:05.687 ************
2025-05-19 14:39:32.117650 | orchestrator | ok: [testbed-manager]
2025-05-19 14:39:32.117660 | orchestrator |
2025-05-19 14:39:32.117671 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:39:32.117682 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:39:32.117693 | orchestrator |
2025-05-19 14:39:32.117703 | orchestrator |
2025-05-19 14:39:32.117714 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:39:32.117725 | orchestrator | Monday 19 May 2025 14:38:58 +0000 (0:00:00.281) 0:00:05.969 ************
2025-05-19 14:39:32.117735 | orchestrator | ===============================================================================
2025-05-19 14:39:32.117746 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.51s
2025-05-19 14:39:32.117757 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.96s
2025-05-19 14:39:32.117768 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.68s
2025-05-19 14:39:32.117796 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.62s
2025-05-19 14:39:32.117807 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.57s
2025-05-19 14:39:32.117818 | orchestrator | Create .kube directory -------------------------------------------------- 0.46s
2025-05-19 14:39:32.117829 | orchestrator | Get home directory of operator user ------------------------------------- 0.45s
2025-05-19 14:39:32.117839 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.28s
2025-05-19 14:39:32.117858 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.27s
2025-05-19 14:39:32.117869 | orchestrator |
2025-05-19 14:39:32.117880 | orchestrator |
2025-05-19 14:39:32.117890 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-05-19 14:39:32.117901 | orchestrator |
2025-05-19 14:39:32.117911 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-19 14:39:32.117922 | orchestrator | Monday 19 May 2025 14:37:16 +0000 (0:00:00.127) 0:00:00.127 ************
2025-05-19 14:39:32.117932 | orchestrator | ok: [localhost] => {
2025-05-19 14:39:32.117944 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-05-19 14:39:32.117955 | orchestrator | }
2025-05-19 14:39:32.117966 | orchestrator |
2025-05-19 14:39:32.117977 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-19 14:39:32.117987 | orchestrator | Monday 19 May 2025 14:37:16 +0000 (0:00:00.038) 0:00:00.165 ************
2025-05-19 14:39:32.117999 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-19 14:39:32.118010 | orchestrator | ...ignoring
2025-05-19 14:39:32.118068 | orchestrator |
2025-05-19 14:39:32.118080 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-19 14:39:32.118091 | orchestrator | Monday 19 May 2025 14:37:18 +0000 (0:00:02.797) 0:00:02.963 ************
2025-05-19 14:39:32.118101 | orchestrator | skipping: [localhost]
2025-05-19 14:39:32.118112 | orchestrator |
2025-05-19 14:39:32.118123 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-19 14:39:32.118133 | orchestrator | Monday 19 May 2025 14:37:18 +0000 (0:00:00.039) 0:00:03.002 ************
2025-05-19 14:39:32.118144 | orchestrator | ok: [localhost]
2025-05-19 14:39:32.118155 | orchestrator |
2025-05-19 14:39:32.118165 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:39:32.118176 | orchestrator |
2025-05-19 14:39:32.118187 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 14:39:32.118197 | orchestrator | Monday 19 May 2025 14:37:19 +0000 (0:00:00.155) 0:00:03.158 ************
2025-05-19 14:39:32.118208 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:39:32.118219 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:39:32.118229 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:39:32.118240 | orchestrator |
2025-05-19 14:39:32.118250 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:39:32.118261 | orchestrator | Monday 19 May 2025 14:37:19 +0000 (0:00:00.359) 0:00:03.518 ************
2025-05-19 14:39:32.118272 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-19 14:39:32.118283 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-19 14:39:32.118293 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
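
The ignored failure above is deliberate, as the preceding debug message explains: the check probes the RabbitMQ management endpoint and its result steers kolla_action_rabbitmq, so a fresh deployment such as this one falls through to kolla_action_ng, while an already-running broker would be upgraded instead. The error text matches what ansible.builtin.wait_for produces, so the check plausibly looks like this (the VIP variable name and the exact timeout value are assumptions):

  - name: Check RabbitMQ service
    ansible.builtin.wait_for:
      host: "{{ kolla_internal_vip_address }}"   # 192.168.16.9 in this run; variable name assumed
      port: 15672
      search_regex: RabbitMQ Management
      timeout: 2                                 # elapsed=2 in the log suggests a short timeout
    register: rabbitmq_check
    ignore_errors: true

  - name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
    ansible.builtin.set_fact:
      kolla_action_rabbitmq: upgrade
    when: rabbitmq_check is success
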
2025-05-19 14:39:32.118304 | orchestrator |
2025-05-19 14:39:32.118336 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-19 14:39:32.118350 | orchestrator |
2025-05-19 14:39:32.118360 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-19 14:39:32.118371 | orchestrator | Monday 19 May 2025 14:37:20 +0000 (0:00:00.811) 0:00:04.329 ************
2025-05-19 14:39:32.118382 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:39:32.118393 | orchestrator |
2025-05-19 14:39:32.118404 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-19 14:39:32.118414 | orchestrator | Monday 19 May 2025 14:37:21 +0000 (0:00:01.395) 0:00:05.725 ************
2025-05-19 14:39:32.118425 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:39:32.118436 | orchestrator |
2025-05-19 14:39:32.118447 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-19 14:39:32.118457 | orchestrator | Monday 19 May 2025 14:37:22 +0000 (0:00:01.199) 0:00:06.925 ************
2025-05-19 14:39:32.118475 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:39:32.118486 | orchestrator |
2025-05-19 14:39:32.118497 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-05-19 14:39:32.118507 | orchestrator | Monday 19 May 2025 14:37:23 +0000 (0:00:00.301) 0:00:07.226 ************
2025-05-19 14:39:32.118518 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:39:32.118529 | orchestrator |
2025-05-19 14:39:32.118545 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-05-19 14:39:32.118556 | orchestrator | Monday 19 May 2025 14:37:23 +0000 (0:00:00.593) 0:00:07.819 ************
2025-05-19 14:39:32.118566 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:39:32.118577 | orchestrator |
2025-05-19 14:39:32.118595 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-05-19 14:39:32.118615 | orchestrator | Monday 19 May 2025 14:37:24 +0000 (0:00:00.366) 0:00:08.186 ************
2025-05-19 14:39:32.118634 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:39:32.118653 | orchestrator |
2025-05-19 14:39:32.118664 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-19 14:39:32.118675 | orchestrator | Monday 19 May 2025 14:37:24 +0000 (0:00:00.505) 0:00:08.692 ************
2025-05-19 14:39:32.118686 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:39:32.118696 | orchestrator |
2025-05-19 14:39:32.118707 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-19 14:39:32.118727 | orchestrator | Monday 19 May 2025 14:37:25 +0000 (0:00:00.744) 0:00:09.436 ************
2025-05-19 14:39:32.118738 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:39:32.118749 | orchestrator |
2025-05-19 14:39:32.118760 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-05-19 14:39:32.118771 | orchestrator | Monday 19 May 2025 14:37:26 +0000 (0:00:00.808) 0:00:10.245 ************
2025-05-19 14:39:32.118781 | orchestrator |
skipping: [testbed-node-0] 2025-05-19 14:39:32.118792 | orchestrator | 2025-05-19 14:39:32.118803 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-19 14:39:32.118814 | orchestrator | Monday 19 May 2025 14:37:26 +0000 (0:00:00.320) 0:00:10.566 ************ 2025-05-19 14:39:32.118824 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:39:32.118835 | orchestrator | 2025-05-19 14:39:32.118846 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-19 14:39:32.118856 | orchestrator | Monday 19 May 2025 14:37:26 +0000 (0:00:00.299) 0:00:10.865 ************ 2025-05-19 14:39:32.118873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 14:39:32.118890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 14:39:32.118922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 14:39:32.118935 | orchestrator | 2025-05-19 14:39:32.118946 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-19 14:39:32.118957 | orchestrator | Monday 19 May 2025 14:37:27 +0000 (0:00:00.896) 0:00:11.762 ************ 2025-05-19 14:39:32.118977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 14:39:32.118990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 14:39:32.119008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 14:39:32.119020 | orchestrator | 2025-05-19 14:39:32.119031 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-19 14:39:32.119041 | orchestrator | Monday 19 May 2025 14:37:29 +0000 (0:00:02.055) 0:00:13.818 ************ 2025-05-19 14:39:32.119052 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-19 14:39:32.119067 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-19 14:39:32.119078 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-19 14:39:32.119089 | orchestrator | 2025-05-19 14:39:32.119099 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-19 14:39:32.119110 | orchestrator | Monday 19 May 2025 14:37:31 +0000 (0:00:02.192) 0:00:16.010 ************ 2025-05-19 14:39:32.119120 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-19 14:39:32.119131 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-19 14:39:32.119142 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-19 14:39:32.119152 | orchestrator | 2025-05-19 14:39:32.119163 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-19 14:39:32.119179 | orchestrator | Monday 19 May 2025 14:37:34 +0000 (0:00:02.849) 0:00:18.859 ************ 2025-05-19 14:39:32.119191 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-19 14:39:32.119201 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-19 14:39:32.119212 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-19 14:39:32.119223 | orchestrator | 2025-05-19 14:39:32.119234 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-19 14:39:32.119244 | orchestrator | Monday 19 May 2025 14:37:36 +0000 (0:00:01.531) 0:00:20.391 ************ 2025-05-19 14:39:32.119255 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-19 14:39:32.119266 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-19 14:39:32.119277 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-19 14:39:32.119287 | orchestrator | 2025-05-19 14:39:32.119298 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-19 14:39:32.119309 | orchestrator | Monday 19 May 2025 14:37:38 +0000 (0:00:02.493) 0:00:22.885 ************ 2025-05-19 14:39:32.119344 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 
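
The long loop items above show kolla's data-driven style: each "Copying over ..." task renders one Jinja2 template into /etc/kolla/rabbitmq/ on every node, and a resulting change notifies the "Restart rabbitmq container" handler that fires later in the play. Reduced to a single template, the pattern looks roughly like this; the 0660 mode and the exact handler wiring follow the usual kolla-ansible layout and are assumptions, not quotes from this role:

  - name: Copying over rabbitmq.conf
    ansible.builtin.template:
      src: "{{ item }}"
      dest: /etc/kolla/rabbitmq/rabbitmq.conf
      mode: "0660"                      # assumed; kolla keeps config files group-readable
    loop:
      - /ansible/roles/rabbitmq/templates/rabbitmq.conf.j2
    notify:
      - Restart rabbitmq container
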
2025-05-19 14:39:32.119362 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-19 14:39:32.119373 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-19 14:39:32.119383 | orchestrator | 2025-05-19 14:39:32.119394 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-19 14:39:32.119405 | orchestrator | Monday 19 May 2025 14:37:40 +0000 (0:00:01.637) 0:00:24.522 ************ 2025-05-19 14:39:32.119415 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-19 14:39:32.119426 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-19 14:39:32.119437 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-19 14:39:32.119447 | orchestrator | 2025-05-19 14:39:32.119458 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-19 14:39:32.119469 | orchestrator | Monday 19 May 2025 14:37:41 +0000 (0:00:01.414) 0:00:25.937 ************ 2025-05-19 14:39:32.119479 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:39:32.119490 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:39:32.119501 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:39:32.119511 | orchestrator | 2025-05-19 14:39:32.119522 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-19 14:39:32.119532 | orchestrator | Monday 19 May 2025 14:37:42 +0000 (0:00:00.400) 0:00:26.337 ************ 2025-05-19 14:39:32.119544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 14:39:32.119569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 14:39:32.119583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 14:39:32.119606 | orchestrator | 2025-05-19 14:39:32.119618 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-19 14:39:32.119628 | orchestrator | Monday 19 May 2025 14:37:43 +0000 (0:00:01.531) 0:00:27.869 ************ 2025-05-19 14:39:32.119639 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:39:32.119650 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:39:32.119660 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:39:32.119671 | orchestrator | 2025-05-19 14:39:32.119681 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-19 14:39:32.119692 | orchestrator | Monday 19 May 2025 14:37:44 +0000 (0:00:00.769) 0:00:28.638 ************ 2025-05-19 14:39:32.119703 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:39:32.119713 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:39:32.119724 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:39:32.119734 | orchestrator | 2025-05-19 14:39:32.119745 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-19 14:39:32.119756 | orchestrator | Monday 19 May 2025 14:37:52 +0000 (0:00:08.399) 0:00:37.037 ************ 2025-05-19 14:39:32.119766 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:39:32.119777 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:39:32.119787 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:39:32.119798 | orchestrator | 2025-05-19 14:39:32.119809 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-19 14:39:32.119819 | orchestrator | 2025-05-19 14:39:32.119830 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-19 14:39:32.119840 | orchestrator | Monday 19 May 2025 14:37:53 +0000 (0:00:00.379) 0:00:37.417 ************ 2025-05-19 14:39:32.119851 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:39:32.119862 | orchestrator | 2025-05-19 14:39:32.119872 | orchestrator | TASK 
[rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-19 14:39:32.119883 | orchestrator | Monday 19 May 2025 14:37:53 +0000 (0:00:00.566) 0:00:37.983 ************
2025-05-19 14:39:32.119893 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:39:32.119904 | orchestrator |
2025-05-19 14:39:32.119915 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-19 14:39:32.119925 | orchestrator | Monday 19 May 2025 14:37:54 +0000 (0:00:00.201) 0:00:38.185 ************
2025-05-19 14:39:32.119936 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:39:32.119946 | orchestrator |
2025-05-19 14:39:32.119957 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-19 14:39:32.119968 | orchestrator | Monday 19 May 2025 14:37:55 +0000 (0:00:01.695) 0:00:39.880 ************
2025-05-19 14:39:32.119978 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:39:32.119989 | orchestrator |
2025-05-19 14:39:32.119999 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-19 14:39:32.120010 | orchestrator |
2025-05-19 14:39:32.120020 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-19 14:39:32.120031 | orchestrator | Monday 19 May 2025 14:38:50 +0000 (0:00:54.580) 0:01:34.461 ************
2025-05-19 14:39:32.120041 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:39:32.120052 | orchestrator |
2025-05-19 14:39:32.120063 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-19 14:39:32.120083 | orchestrator | Monday 19 May 2025 14:38:50 +0000 (0:00:00.574) 0:01:35.035 ************
2025-05-19 14:39:32.120094 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:39:32.120105 | orchestrator |
2025-05-19 14:39:32.120115 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-19 14:39:32.120126 | orchestrator | Monday 19 May 2025 14:38:51 +0000 (0:00:00.291) 0:01:35.327 ************
2025-05-19 14:39:32.120137 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:39:32.120147 | orchestrator |
2025-05-19 14:39:32.120158 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-19 14:39:32.120169 | orchestrator | Monday 19 May 2025 14:38:57 +0000 (0:00:06.711) 0:01:42.038 ************
2025-05-19 14:39:32.120179 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:39:32.120190 | orchestrator |
2025-05-19 14:39:32.120200 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-19 14:39:32.120211 | orchestrator |
2025-05-19 14:39:32.120222 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-19 14:39:32.120232 | orchestrator | Monday 19 May 2025 14:39:08 +0000 (0:00:10.771) 0:01:52.809 ************
2025-05-19 14:39:32.120243 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:39:32.120253 | orchestrator |
2025-05-19 14:39:32.120269 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-19 14:39:32.120281 | orchestrator | Monday 19 May 2025 14:39:09 +0000 (0:00:00.614) 0:01:53.424 ************
2025-05-19 14:39:32.120291 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:39:32.120302 | orchestrator |
2025-05-19 14:39:32.120330 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-19 14:39:32.120347 | orchestrator | Monday 19 May 2025 14:39:09 +0000 (0:00:00.234) 0:01:53.658 ************
2025-05-19 14:39:32.120358 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:39:32.120369 | orchestrator |
2025-05-19 14:39:32.120379 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-19 14:39:32.120390 | orchestrator | Monday 19 May 2025 14:39:11 +0000 (0:00:01.553) 0:01:55.211 ************
2025-05-19 14:39:32.120401 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:39:32.120412 | orchestrator |
2025-05-19 14:39:32.120422 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-05-19 14:39:32.120433 | orchestrator |
2025-05-19 14:39:32.120444 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-05-19 14:39:32.120455 | orchestrator | Monday 19 May 2025 14:39:25 +0000 (0:00:14.398) 0:02:09.610 ************
2025-05-19 14:39:32.120465 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:39:32.120476 | orchestrator |
2025-05-19 14:39:32.120487 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-05-19 14:39:32.120497 | orchestrator | Monday 19 May 2025 14:39:27 +0000 (0:00:01.527) 0:02:11.138 ************
2025-05-19 14:39:32.120508 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-19 14:39:32.120519 | orchestrator | enable_outward_rabbitmq_True
2025-05-19 14:39:32.120530 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-19 14:39:32.120541 | orchestrator | outward_rabbitmq_restart
2025-05-19 14:39:32.120551 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:39:32.120562 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:39:32.120573 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:39:32.120584 | orchestrator |
2025-05-19 14:39:32.120595 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-05-19 14:39:32.120605 | orchestrator | skipping: no hosts matched
2025-05-19 14:39:32.120616 | orchestrator |
2025-05-19 14:39:32.120627 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-05-19 14:39:32.120638 | orchestrator | skipping: no hosts matched
2025-05-19 14:39:32.120648 | orchestrator |
2025-05-19 14:39:32.120659 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-05-19 14:39:32.120677 | orchestrator | skipping: no hosts matched
2025-05-19 14:39:32.120687 | orchestrator |
2025-05-19 14:39:32.120698 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:39:32.120709 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-05-19 14:39:32.120720 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-19 14:39:32.120731 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 14:39:32.120742 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 14:39:32.120753 | orchestrator |
2025-05-19 14:39:32.120764 | orchestrator |
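
Note the timestamps in the three "Restart rabbitmq services" plays above: node-0, node-1 and node-2 are restarted strictly one after another, and each play waits until the broker is back before the next begins, the usual rolling-restart pattern for a clustered broker. Condensed, and with the container restart and the wait expressed through generic modules rather than kolla's own container wrapper, the pattern is roughly:

  - name: Restart rabbitmq services
    hosts: rabbitmq
    serial: 1                      # one node at a time preserves cluster quorum
    tasks:
      - name: Restart rabbitmq container
        community.docker.docker_container:
          name: rabbitmq
          state: started
          restart: true
      - name: Waiting for rabbitmq to start
        ansible.builtin.wait_for:
          host: "{{ ansible_host }}"
          port: 15672
          search_regex: RabbitMQ Management
          timeout: 300

The "Enable all stable feature flags" task in the post-configuration play then presumably runs the equivalent of "rabbitmqctl enable_feature_flag all" inside each container.
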
2025-05-19 14:39:32.120775 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:39:32.120786 | orchestrator | Monday 19 May 2025 14:39:29 +0000 (0:00:02.448) 0:02:13.586 ************
2025-05-19 14:39:32.120796 | orchestrator | ===============================================================================
2025-05-19 14:39:32.120807 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.75s
2025-05-19 14:39:32.120818 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.96s
2025-05-19 14:39:32.120828 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.40s
2025-05-19 14:39:32.120839 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.85s
2025-05-19 14:39:32.120850 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.80s
2025-05-19 14:39:32.120860 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.49s
2025-05-19 14:39:32.120871 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.45s
2025-05-19 14:39:32.120886 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.19s
2025-05-19 14:39:32.120897 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.06s
2025-05-19 14:39:32.120908 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.76s
2025-05-19 14:39:32.120919 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.64s
2025-05-19 14:39:32.120929 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.53s
2025-05-19 14:39:32.120939 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.53s
2025-05-19 14:39:32.120950 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.53s
2025-05-19 14:39:32.120961 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.41s
2025-05-19 14:39:32.120971 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.40s
2025-05-19 14:39:32.120982 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.20s
2025-05-19 14:39:32.120999 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.90s
2025-05-19 14:39:32.121010 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2025-05-19 14:39:32.121020 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.81s
2025-05-19 14:39:32.121031 | orchestrator | 2025-05-19 14:39:32 | INFO  | Task f574db88-944c-4750-8db9-e34691b439de is in state SUCCESS
2025-05-19 14:39:32.121042 | orchestrator | 2025-05-19 14:39:32 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:39:32.121052 | orchestrator | 2025-05-19 14:39:32 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:39:32.121063 | orchestrator | 2025-05-19 14:39:32 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:39:32.121080 | orchestrator | 2025-05-19 14:39:32 | INFO  | Wait 1 second(s) until the next check
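
The interleaved "Task <uuid> is in state ..." lines are not Ansible output; they come from the deployment client on the manager, which polls its background tasks (four running in parallel here) once every few seconds until each reports SUCCESS. Expressed as an Ansible-style retry loop against a purely hypothetical status endpoint, the waiting logic amounts to:

  - name: Wait for a deployment task to finish
    ansible.builtin.uri:
      url: "http://manager.internal/api/v1/tasks/{{ task_id }}"   # hypothetical endpoint for illustration
    register: task_state
    until: task_state.json.state in ["SUCCESS", "FAILURE"]
    retries: 3600
    delay: 1
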
2025-05-19 14:39:35.144689 | orchestrator | 2025-05-19 14:39:35 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:39:35.144754 | orchestrator | 2025-05-19 14:39:35 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:39:35.145482 | orchestrator | 2025-05-19 14:39:35 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:39:35.145531 | orchestrator | 2025-05-19 14:39:35 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:39:38.178485 | orchestrator | 2025-05-19 14:39:38 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:39:38.180392 | orchestrator | 2025-05-19 14:39:38 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:39:38.182578 | orchestrator | 2025-05-19 14:39:38 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:39:38.182631 | orchestrator | 2025-05-19 14:39:38 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:39:41.230691 | orchestrator | 2025-05-19 14:39:41 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:39:41.230801 | orchestrator | 2025-05-19 14:39:41 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:39:41.230816 | orchestrator | 2025-05-19 14:39:41 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:39:41.230828 | orchestrator | 2025-05-19 14:39:41 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:39:44.268951 | orchestrator | 2025-05-19 14:39:44 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:39:44.269496 | orchestrator | 2025-05-19 14:39:44 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:39:44.270308 | orchestrator | 2025-05-19 14:39:44 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:39:44.270328 | orchestrator | 2025-05-19 14:39:44 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:39:47.318010 | orchestrator | 2025-05-19 14:39:47 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:39:47.318354 | orchestrator | 2025-05-19 14:39:47 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:39:47.320162 | orchestrator | 2025-05-19 14:39:47 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:39:47.320200 | orchestrator | 2025-05-19 14:39:47 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:39:50.377293 | orchestrator | 2025-05-19 14:39:50 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:39:50.377447 | orchestrator | 2025-05-19 14:39:50 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:39:50.377687 | orchestrator | 2025-05-19 14:39:50 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:39:50.377702 | orchestrator | 2025-05-19 14:39:50 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:39:53.421974 | orchestrator | 2025-05-19 14:39:53 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:39:53.422285 | orchestrator | 2025-05-19 14:39:53 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:39:53.424109 | orchestrator | 2025-05-19 14:39:53 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:39:53.424142 | orchestrator | 2025-05-19 14:39:53 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:39:56.470666 | orchestrator | 2025-05-19 14:39:56 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:39:56.471222 | orchestrator | 2025-05-19 14:39:56 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:39:56.476611 | orchestrator | 2025-05-19 14:39:56 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:39:56.476637 | orchestrator | 2025-05-19 14:39:56 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:39:59.519732 | orchestrator | 2025-05-19 14:39:59 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:39:59.519912 | orchestrator | 2025-05-19 14:39:59 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:39:59.520870 | orchestrator | 2025-05-19 14:39:59 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:39:59.520894 | orchestrator | 2025-05-19 14:39:59 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:40:02.559984 | orchestrator | 2025-05-19 14:40:02 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:40:02.560520 | orchestrator | 2025-05-19 14:40:02 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:40:02.561752 | orchestrator | 2025-05-19 14:40:02 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:40:02.561776 | orchestrator | 2025-05-19 14:40:02 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:40:05.606765 | orchestrator | 2025-05-19 14:40:05 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:40:05.606884 | orchestrator | 2025-05-19 14:40:05 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:40:05.606894 | orchestrator | 2025-05-19 14:40:05 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:40:05.606944 | orchestrator | 2025-05-19 14:40:05 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:40:08.662067 | orchestrator | 2025-05-19 14:40:08 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:40:08.663541 | orchestrator | 2025-05-19 14:40:08 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:40:08.665681 | orchestrator | 2025-05-19 14:40:08 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:40:08.665733 | orchestrator | 2025-05-19 14:40:08 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:40:11.717076 | orchestrator | 2025-05-19 14:40:11 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:40:11.718350 | orchestrator | 2025-05-19 14:40:11 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:40:11.719686 | orchestrator | 2025-05-19 14:40:11 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:40:11.719984 | orchestrator | 2025-05-19 14:40:11 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:40:14.762362 | orchestrator | 2025-05-19 14:40:14 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:40:14.765023 | orchestrator | 2025-05-19 14:40:14 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:40:14.765892 | orchestrator | 2025-05-19 14:40:14 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:40:14.766292 | orchestrator | 2025-05-19 14:40:14 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:40:17.830307 | orchestrator | 2025-05-19 14:40:17 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:40:17.832702 | orchestrator | 2025-05-19 14:40:17 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:40:17.834319 | orchestrator | 2025-05-19 14:40:17 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:40:17.835473 | orchestrator | 2025-05-19 14:40:17 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:40:20.954327 | orchestrator | 2025-05-19 14:40:20 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:40:20.955670 | orchestrator | 2025-05-19 14:40:20 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:40:20.957206 | orchestrator | 2025-05-19 14:40:20 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:40:20.957218 | orchestrator | 2025-05-19 14:40:20 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:40:23.987927 | orchestrator | 2025-05-19 14:40:23 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:40:23.988140 | orchestrator | 2025-05-19 14:40:23 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:40:23.989025 | orchestrator | 2025-05-19 14:40:23 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:40:23.989190 | orchestrator | 2025-05-19 14:40:23 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:40:27.032522 | orchestrator | 2025-05-19 14:40:27 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:40:27.032644 | orchestrator | 2025-05-19 14:40:27 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state STARTED
2025-05-19 14:40:27.032653 | orchestrator | 2025-05-19 14:40:27 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED
2025-05-19 14:40:27.033345 | orchestrator | 2025-05-19 14:40:27 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:40:30.084197 | orchestrator | 2025-05-19 14:40:30 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:40:30.085128 | orchestrator | 2025-05-19 14:40:30 | INFO  | Task 3c7f2221-c2b6-4172-bc3f-e578dd95a7d2 is in state SUCCESS
2025-05-19 14:40:30.085271 | orchestrator |
2025-05-19 14:40:30.088066 | orchestrator |
2025-05-19 14:40:30.088114 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:40:30.088128 | orchestrator |
2025-05-19 14:40:30.088140 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 14:40:30.088152 | orchestrator | Monday 19 May 2025 14:38:03 +0000 (0:00:00.169) 0:00:00.169 ************
2025-05-19 14:40:30.088163 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:40:30.088175 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:40:30.088186 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:40:30.088197 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:40:30.088208 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:40:30.088219 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:40:30.088230 | orchestrator |
2025-05-19 14:40:30.088241 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:40:30.088252 | orchestrator |
Monday 19 May 2025 14:38:03 +0000 (0:00:00.778) 0:00:00.948 ************ 2025-05-19 14:40:30.088264 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-19 14:40:30.088275 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-19 14:40:30.088286 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-19 14:40:30.088297 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-19 14:40:30.088308 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-19 14:40:30.088319 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-19 14:40:30.088361 | orchestrator | 2025-05-19 14:40:30.088381 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-19 14:40:30.088398 | orchestrator | 2025-05-19 14:40:30.088409 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-19 14:40:30.088420 | orchestrator | Monday 19 May 2025 14:38:05 +0000 (0:00:01.332) 0:00:02.281 ************ 2025-05-19 14:40:30.088432 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:40:30.088444 | orchestrator | 2025-05-19 14:40:30.088455 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-19 14:40:30.088466 | orchestrator | Monday 19 May 2025 14:38:06 +0000 (0:00:01.042) 0:00:03.323 ************ 2025-05-19 14:40:30.088478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088525 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088613 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088628 | orchestrator | 2025-05-19 14:40:30.088680 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-19 14:40:30.088694 | orchestrator | Monday 19 May 2025 14:38:07 +0000 (0:00:01.160) 0:00:04.483 ************ 2025-05-19 14:40:30.088708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088800 | orchestrator | 2025-05-19 14:40:30.088813 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-19 14:40:30.088825 | orchestrator | Monday 19 May 2025 14:38:08 +0000 (0:00:01.295) 0:00:05.779 ************ 2025-05-19 14:40:30.088838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088891 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088930 | orchestrator | 2025-05-19 14:40:30.088942 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-19 14:40:30.088955 | orchestrator | Monday 19 May 2025 14:38:09 
+0000 (0:00:00.949) 0:00:06.729 ************ 2025-05-19 14:40:30.088968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.088982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.089054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.089075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.089086 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.089097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.089116 | orchestrator | 2025-05-19 14:40:30.089135 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-19 14:40:30.089146 | orchestrator | Monday 19 May 2025 14:38:11 +0000 (0:00:01.333) 0:00:08.062 ************ 2025-05-19 14:40:30.089157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
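[Editor's note] The task sequence above follows kolla's standard container contract: kolla-ansible renders the service configuration into /etc/kolla/ovn-controller/ on each host, bind-mounts that directory read-only at /var/lib/kolla/config_files/ inside the container (visible in the volumes list of every item), and ships a config.json telling the container entrypoint which files to copy into place at startup. The systemd override tasks adjust the generated unit for the ovn_controller container. A quick way to inspect this on a deployed node -- a sketch assuming the Docker runtime and kolla's usual unit naming scheme, with paths taken from the log records above:

  # Rendered config set for ovn-controller on the host
  sudo ls /etc/kolla/ovn-controller/            # config.json plus service config files
  sudo cat /etc/kolla/ovn-controller/config.json
  # Same directory as the container sees it (read-only bind mount from the volumes list)
  sudo docker exec ovn_controller ls /var/lib/kolla/config_files/
  # The systemd override copied above lands next to the generated unit
  # (unit name assumed from kolla's kolla-<container>-container.service convention)
  systemctl cat kolla-ovn_controller-container.service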
2025-05-19 14:40:30.089169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.089180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.089191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.089207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.089218 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.089229 | orchestrator | 2025-05-19 14:40:30.089240 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-19 14:40:30.089251 | orchestrator | Monday 19 May 2025 14:38:12 +0000 (0:00:01.311) 0:00:09.373 ************ 2025-05-19 14:40:30.089262 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:40:30.089273 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:40:30.089284 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:40:30.089295 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:40:30.089306 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:40:30.089316 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:40:30.089327 | orchestrator | 2025-05-19 14:40:30.089337 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-19 14:40:30.089348 | orchestrator | Monday 19 May 2025 14:38:15 +0000 (0:00:03.171) 0:00:12.545 ************ 2025-05-19 14:40:30.089366 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-19 14:40:30.089378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 
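[Editor's note] The two tasks around this point do the actual chassis configuration: "Create br-int bridge on OpenvSwitch" creates the OVN integration bridge, and "Configure OVN in OVSDB" writes external_ids into the local Open_vSwitch table, which is how ovn-controller learns its tunnel endpoint, encapsulation type and the southbound database endpoints. The per-item records in this loop correspond to roughly the following ovs-vsctl calls -- a sketch using the values testbed-node-0 receives in this run; kolla-ansible applies them through its own module rather than the CLI:

  # Integration bridge (idempotent)
  ovs-vsctl --may-exist add-br br-int
  # Chassis settings in the Open_vSwitch table, as ovn-controller reads them
  ovs-vsctl set open_vswitch . external-ids:ovn-encap-ip=192.168.16.10
  ovs-vsctl set open_vswitch . external-ids:ovn-encap-type=geneve
  ovs-vsctl set open_vswitch . external-ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"
  ovs-vsctl set open_vswitch . external-ids:ovn-remote-probe-interval=60000
  ovs-vsctl set open_vswitch . external-ids:ovn-openflow-probe-interval=60
  ovs-vsctl set open_vswitch . external-ids:ovn-monitor-all=false

ovn-remote lists all three ovn-sb-db members (port 6642) so the chassis can fail over between Raft members; geneve is the overlay encapsulation between the node IPs 192.168.16.10-15.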
2025-05-19 14:40:30.089389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-19 14:40:30.089400 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-19 14:40:30.089410 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-19 14:40:30.089421 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-19 14:40:30.089432 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 14:40:30.089443 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 14:40:30.089459 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 14:40:30.089470 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 14:40:30.089481 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 14:40:30.089492 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 14:40:30.089503 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 14:40:30.089516 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 14:40:30.089527 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 14:40:30.089537 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 14:40:30.089567 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 14:40:30.089578 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 14:40:30.089589 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 14:40:30.089602 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 14:40:30.089621 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 14:40:30.089641 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 14:40:30.089659 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 14:40:30.089677 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 14:40:30.089695 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 14:40:30.089714 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 14:40:30.089730 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 14:40:30.089741 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 14:40:30.089757 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 14:40:30.089776 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 14:40:30.089787 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 14:40:30.089798 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 14:40:30.089809 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 14:40:30.089820 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 14:40:30.089831 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 14:40:30.089841 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 14:40:30.089852 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-19 14:40:30.089863 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-19 14:40:30.089874 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-19 14:40:30.089885 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-19 14:40:30.089896 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-19 14:40:30.089906 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-19 14:40:30.089917 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-19 14:40:30.089929 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-19 14:40:30.089947 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-19 14:40:30.089958 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-19 14:40:30.089969 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-19 14:40:30.089980 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-19 14:40:30.089991 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-19 14:40:30.090002 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-19 14:40:30.090080 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-19 14:40:30.090095 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-19 14:40:30.090106 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-19 14:40:30.090117 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-19 14:40:30.090128 | orchestrator | 2025-05-19 14:40:30.090138 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 14:40:30.090149 | orchestrator | Monday 19 May 2025 14:38:33 +0000 (0:00:17.440) 0:00:29.985 ************ 2025-05-19 14:40:30.090160 | orchestrator | 2025-05-19 14:40:30.090171 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 14:40:30.090200 | orchestrator | Monday 19 May 2025 14:38:33 +0000 (0:00:00.069) 0:00:30.055 ************ 2025-05-19 14:40:30.090217 | orchestrator | 2025-05-19 14:40:30.090235 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 14:40:30.090252 | orchestrator | Monday 19 May 2025 14:38:33 +0000 (0:00:00.091) 0:00:30.146 ************ 2025-05-19 14:40:30.090270 | orchestrator | 2025-05-19 14:40:30.090289 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 14:40:30.090309 | orchestrator | Monday 19 May 2025 14:38:33 +0000 (0:00:00.138) 0:00:30.285 ************ 2025-05-19 14:40:30.090330 | orchestrator | 2025-05-19 14:40:30.090350 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 14:40:30.090369 | orchestrator | Monday 19 May 2025 14:38:33 +0000 (0:00:00.069) 0:00:30.354 ************ 2025-05-19 14:40:30.090390 | orchestrator | 2025-05-19 14:40:30.090412 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 14:40:30.090433 | orchestrator | Monday 19 May 2025 14:38:33 +0000 (0:00:00.049) 0:00:30.404 ************ 2025-05-19 14:40:30.090451 | orchestrator | 2025-05-19 14:40:30.090480 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-19 14:40:30.090500 | orchestrator | Monday 19 May 2025 14:38:33 +0000 (0:00:00.054) 0:00:30.458 ************ 2025-05-19 14:40:30.090519 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:40:30.090540 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.090590 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:40:30.090609 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:40:30.090629 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.090648 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.090668 | orchestrator | 2025-05-19 14:40:30.090687 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-19 14:40:30.090706 | orchestrator | Monday 19 May 2025 14:38:35 +0000 (0:00:01.686) 0:00:32.145 ************ 2025-05-19 14:40:30.090724 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:40:30.090742 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:40:30.090762 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:40:30.090780 | orchestrator | changed: [testbed-node-4] 
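[Editor's note] The ovn-bridge-mappings, ovn-chassis-mac-mappings and ovn-cms-options records above show the chassis split: testbed-node-0/1/2 get ovn-bridge-mappings=physnet1:br-ex plus ovn-cms-options=enable-chassis-as-gw,availability-zones=nova and so act as gateway chassis for external traffic, while testbed-node-3/4/5 have those keys removed (state 'absent') and instead receive ovn-chassis-mac-mappings so compute-only chassis can still source traffic on physnet1. After the handler restarts the ovn_controller containers, the "Apply role ovn-db" play that follows bootstraps three-node Raft clusters for the OVN north- and southbound databases on nodes 0-2; the "Configure OVN NB/SB connection settings" tasks run only on the elected leader, which is why node-1 and node-2 are skipped there. A sketch of how to verify the result by hand -- control-socket paths and the NB default port 6641 are assumptions and can differ per deployment:

  # Raft state of the northbound cluster (run on a node hosting ovn_nb_db)
  sudo docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
  # Listeners the playbook configures on the leader: NB on 6641, SB on 6642
  ovn-nbctl --db=tcp:192.168.16.10:6641 get-connection
  ovn-sbctl --db=tcp:192.168.16.10:6642 get-connection
  # Chassis registered by each ovn-controller after the restart
  ovn-sbctl --db=tcp:192.168.16.10:6642 show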
2025-05-19 14:40:30.090798 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:40:30.090814 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:40:30.090825 | orchestrator | 2025-05-19 14:40:30.090835 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-19 14:40:30.090846 | orchestrator | 2025-05-19 14:40:30.090857 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-19 14:40:30.090868 | orchestrator | Monday 19 May 2025 14:39:12 +0000 (0:00:37.687) 0:01:09.832 ************ 2025-05-19 14:40:30.090878 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:40:30.090889 | orchestrator | 2025-05-19 14:40:30.090900 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-19 14:40:30.090910 | orchestrator | Monday 19 May 2025 14:39:13 +0000 (0:00:00.473) 0:01:10.306 ************ 2025-05-19 14:40:30.090921 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:40:30.090932 | orchestrator | 2025-05-19 14:40:30.090942 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-19 14:40:30.090953 | orchestrator | Monday 19 May 2025 14:39:13 +0000 (0:00:00.629) 0:01:10.935 ************ 2025-05-19 14:40:30.090963 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.090974 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.090985 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.090995 | orchestrator | 2025-05-19 14:40:30.091006 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-19 14:40:30.091016 | orchestrator | Monday 19 May 2025 14:39:14 +0000 (0:00:00.743) 0:01:11.679 ************ 2025-05-19 14:40:30.091027 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.091037 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.091060 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.091071 | orchestrator | 2025-05-19 14:40:30.091092 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-19 14:40:30.091103 | orchestrator | Monday 19 May 2025 14:39:15 +0000 (0:00:00.300) 0:01:11.979 ************ 2025-05-19 14:40:30.091114 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.091125 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.091135 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.091146 | orchestrator | 2025-05-19 14:40:30.091157 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-19 14:40:30.091167 | orchestrator | Monday 19 May 2025 14:39:15 +0000 (0:00:00.291) 0:01:12.272 ************ 2025-05-19 14:40:30.091178 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.091188 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.091199 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.091209 | orchestrator | 2025-05-19 14:40:30.091220 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-19 14:40:30.091231 | orchestrator | Monday 19 May 2025 14:39:15 +0000 (0:00:00.477) 0:01:12.749 ************ 2025-05-19 14:40:30.091242 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.091252 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.091263 | 
orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.091274 | orchestrator | 2025-05-19 14:40:30.091287 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-19 14:40:30.091305 | orchestrator | Monday 19 May 2025 14:39:16 +0000 (0:00:00.297) 0:01:13.047 ************ 2025-05-19 14:40:30.091330 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.091354 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.091370 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.091386 | orchestrator | 2025-05-19 14:40:30.091403 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-19 14:40:30.091420 | orchestrator | Monday 19 May 2025 14:39:16 +0000 (0:00:00.261) 0:01:13.309 ************ 2025-05-19 14:40:30.091448 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.091464 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.091479 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.091495 | orchestrator | 2025-05-19 14:40:30.091511 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-19 14:40:30.091528 | orchestrator | Monday 19 May 2025 14:39:16 +0000 (0:00:00.261) 0:01:13.571 ************ 2025-05-19 14:40:30.091543 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.091587 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.091604 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.091621 | orchestrator | 2025-05-19 14:40:30.091637 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-19 14:40:30.091654 | orchestrator | Monday 19 May 2025 14:39:17 +0000 (0:00:00.499) 0:01:14.070 ************ 2025-05-19 14:40:30.091671 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.091688 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.091705 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.091723 | orchestrator | 2025-05-19 14:40:30.091741 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-19 14:40:30.091758 | orchestrator | Monday 19 May 2025 14:39:17 +0000 (0:00:00.315) 0:01:14.386 ************ 2025-05-19 14:40:30.091777 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.091795 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.091812 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.091830 | orchestrator | 2025-05-19 14:40:30.091848 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-19 14:40:30.091878 | orchestrator | Monday 19 May 2025 14:39:17 +0000 (0:00:00.278) 0:01:14.664 ************ 2025-05-19 14:40:30.091897 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.091915 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.091933 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.091951 | orchestrator | 2025-05-19 14:40:30.091983 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-19 14:40:30.092003 | orchestrator | Monday 19 May 2025 14:39:18 +0000 (0:00:00.334) 0:01:14.998 ************ 2025-05-19 14:40:30.092022 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.092041 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.092060 | orchestrator | skipping: [testbed-node-2] 2025-05-19 
14:40:30.092079 | orchestrator | 2025-05-19 14:40:30.092098 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-19 14:40:30.092117 | orchestrator | Monday 19 May 2025 14:39:18 +0000 (0:00:00.486) 0:01:15.485 ************ 2025-05-19 14:40:30.092135 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.092154 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.092173 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.092190 | orchestrator | 2025-05-19 14:40:30.092207 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-19 14:40:30.092225 | orchestrator | Monday 19 May 2025 14:39:18 +0000 (0:00:00.368) 0:01:15.853 ************ 2025-05-19 14:40:30.092242 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.092260 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.092278 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.092295 | orchestrator | 2025-05-19 14:40:30.092312 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-19 14:40:30.092329 | orchestrator | Monday 19 May 2025 14:39:19 +0000 (0:00:00.298) 0:01:16.152 ************ 2025-05-19 14:40:30.092346 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.092364 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.092382 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.092400 | orchestrator | 2025-05-19 14:40:30.092418 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-19 14:40:30.092433 | orchestrator | Monday 19 May 2025 14:39:19 +0000 (0:00:00.264) 0:01:16.417 ************ 2025-05-19 14:40:30.092450 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.092466 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.092483 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.092501 | orchestrator | 2025-05-19 14:40:30.092518 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-19 14:40:30.092536 | orchestrator | Monday 19 May 2025 14:39:19 +0000 (0:00:00.426) 0:01:16.844 ************ 2025-05-19 14:40:30.092583 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.092605 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.092645 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.092664 | orchestrator | 2025-05-19 14:40:30.092684 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-19 14:40:30.092703 | orchestrator | Monday 19 May 2025 14:39:20 +0000 (0:00:00.275) 0:01:17.119 ************ 2025-05-19 14:40:30.092721 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:40:30.092742 | orchestrator | 2025-05-19 14:40:30.092763 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-19 14:40:30.092783 | orchestrator | Monday 19 May 2025 14:39:20 +0000 (0:00:00.542) 0:01:17.661 ************ 2025-05-19 14:40:30.092800 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.092812 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.092823 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.092834 | orchestrator | 2025-05-19 14:40:30.092845 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] 
******************* 2025-05-19 14:40:30.092856 | orchestrator | Monday 19 May 2025 14:39:21 +0000 (0:00:00.826) 0:01:18.488 ************ 2025-05-19 14:40:30.092866 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.092877 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.092888 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.092898 | orchestrator | 2025-05-19 14:40:30.092909 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-19 14:40:30.092932 | orchestrator | Monday 19 May 2025 14:39:21 +0000 (0:00:00.411) 0:01:18.899 ************ 2025-05-19 14:40:30.092943 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.092953 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.092969 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.092986 | orchestrator | 2025-05-19 14:40:30.093006 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-19 14:40:30.093024 | orchestrator | Monday 19 May 2025 14:39:22 +0000 (0:00:00.361) 0:01:19.261 ************ 2025-05-19 14:40:30.093043 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.093054 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.093064 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.093075 | orchestrator | 2025-05-19 14:40:30.093085 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-19 14:40:30.093096 | orchestrator | Monday 19 May 2025 14:39:22 +0000 (0:00:00.331) 0:01:19.593 ************ 2025-05-19 14:40:30.093106 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.093117 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.093128 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.093138 | orchestrator | 2025-05-19 14:40:30.093149 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-19 14:40:30.093159 | orchestrator | Monday 19 May 2025 14:39:23 +0000 (0:00:00.483) 0:01:20.076 ************ 2025-05-19 14:40:30.093170 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.093180 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.093191 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.093202 | orchestrator | 2025-05-19 14:40:30.093212 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-19 14:40:30.093232 | orchestrator | Monday 19 May 2025 14:39:23 +0000 (0:00:00.286) 0:01:20.362 ************ 2025-05-19 14:40:30.093250 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.093268 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.093288 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.093307 | orchestrator | 2025-05-19 14:40:30.093327 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-19 14:40:30.093338 | orchestrator | Monday 19 May 2025 14:39:23 +0000 (0:00:00.296) 0:01:20.658 ************ 2025-05-19 14:40:30.093349 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.093366 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.093384 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.093406 | orchestrator | 2025-05-19 14:40:30.093430 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-19 14:40:30.093447 | orchestrator | 
Monday 19 May 2025 14:39:23 +0000 (0:00:00.306) 0:01:20.965 ************ 2025-05-19 14:40:30.093465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093684 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093701 | orchestrator | 2025-05-19 14:40:30.093718 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-19 14:40:30.093736 | orchestrator | Monday 19 May 2025 14:39:25 +0000 (0:00:01.771) 0:01:22.737 ************ 2025-05-19 14:40:30.093771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.093966 | orchestrator | 2025-05-19 14:40:30.093985 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-19 14:40:30.094004 | orchestrator | Monday 19 May 2025 14:39:30 +0000 (0:00:04.410) 0:01:27.147 ************ 2025-05-19 14:40:30.094076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.094106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.094127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.094148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.094174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.094196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.094208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.094219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.094231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.094242 | orchestrator | 2025-05-19 14:40:30.094253 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-19 14:40:30.094263 | orchestrator | Monday 19 May 2025 14:39:32 +0000 (0:00:02.042) 0:01:29.189 ************ 2025-05-19 14:40:30.094274 | orchestrator | 2025-05-19 14:40:30.094286 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-19 14:40:30.094297 | orchestrator | Monday 19 May 2025 14:39:32 +0000 (0:00:00.049) 0:01:29.239 ************ 2025-05-19 14:40:30.094307 | orchestrator | 2025-05-19 14:40:30.094318 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-19 14:40:30.094329 | orchestrator | Monday 19 May 2025 14:39:32 +0000 (0:00:00.051) 0:01:29.290 ************ 2025-05-19 14:40:30.094339 | orchestrator | 2025-05-19 14:40:30.094350 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-19 14:40:30.094361 | orchestrator | Monday 19 May 2025 14:39:32 +0000 (0:00:00.050) 0:01:29.341 ************ 2025-05-19 14:40:30.094372 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:40:30.094382 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:40:30.094393 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:40:30.094404 | orchestrator | 2025-05-19 14:40:30.094414 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-19 14:40:30.094425 | orchestrator | Monday 19 May 2025 14:39:34 +0000 (0:00:02.441) 0:01:31.782 ************ 2025-05-19 14:40:30.094436 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:40:30.094451 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:40:30.094462 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:40:30.094473 | orchestrator | 2025-05-19 14:40:30.094489 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-19 14:40:30.094500 | 
orchestrator | Monday 19 May 2025 14:39:42 +0000 (0:00:07.550) 0:01:39.333 ************ 2025-05-19 14:40:30.094511 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:40:30.094521 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:40:30.094532 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:40:30.094543 | orchestrator | 2025-05-19 14:40:30.094588 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-19 14:40:30.094600 | orchestrator | Monday 19 May 2025 14:39:48 +0000 (0:00:06.614) 0:01:45.948 ************ 2025-05-19 14:40:30.094610 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.094621 | orchestrator | 2025-05-19 14:40:30.094632 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-19 14:40:30.094642 | orchestrator | Monday 19 May 2025 14:39:49 +0000 (0:00:00.139) 0:01:46.088 ************ 2025-05-19 14:40:30.094653 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.094664 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.094674 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.094685 | orchestrator | 2025-05-19 14:40:30.094696 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-19 14:40:30.094706 | orchestrator | Monday 19 May 2025 14:39:49 +0000 (0:00:00.717) 0:01:46.805 ************ 2025-05-19 14:40:30.094717 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.094727 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.094738 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:40:30.094749 | orchestrator | 2025-05-19 14:40:30.094760 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-19 14:40:30.094770 | orchestrator | Monday 19 May 2025 14:39:50 +0000 (0:00:01.117) 0:01:47.922 ************ 2025-05-19 14:40:30.094781 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.094792 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.094802 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.094813 | orchestrator | 2025-05-19 14:40:30.094824 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-19 14:40:30.094834 | orchestrator | Monday 19 May 2025 14:39:51 +0000 (0:00:00.759) 0:01:48.682 ************ 2025-05-19 14:40:30.094845 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.094856 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.094867 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:40:30.094877 | orchestrator | 2025-05-19 14:40:30.094888 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-19 14:40:30.094899 | orchestrator | Monday 19 May 2025 14:39:52 +0000 (0:00:00.616) 0:01:49.298 ************ 2025-05-19 14:40:30.094910 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.094921 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.094938 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.094949 | orchestrator | 2025-05-19 14:40:30.094960 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-19 14:40:30.094971 | orchestrator | Monday 19 May 2025 14:39:53 +0000 (0:00:00.762) 0:01:50.060 ************ 2025-05-19 14:40:30.094982 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.094992 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.095003 | orchestrator | 
ok: [testbed-node-2] 2025-05-19 14:40:30.095013 | orchestrator | 2025-05-19 14:40:30.095024 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-19 14:40:30.095035 | orchestrator | Monday 19 May 2025 14:39:54 +0000 (0:00:01.068) 0:01:51.129 ************ 2025-05-19 14:40:30.095046 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.095057 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.095067 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.095078 | orchestrator | 2025-05-19 14:40:30.095089 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-19 14:40:30.095099 | orchestrator | Monday 19 May 2025 14:39:54 +0000 (0:00:00.281) 0:01:51.411 ************ 2025-05-19 14:40:30.095111 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095129 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095140 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095152 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095169 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095181 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095193 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095204 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095226 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095238 | orchestrator | 2025-05-19 14:40:30.095249 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-19 14:40:30.095260 | orchestrator | Monday 19 May 2025 14:39:55 +0000 (0:00:01.323) 0:01:52.735 ************ 2025-05-19 14:40:30.095271 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095290 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095302 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095313 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095352 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095386 | orchestrator | 2025-05-19 14:40:30.095398 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-19 14:40:30.095417 | orchestrator | Monday 19 May 2025 14:40:00 +0000 (0:00:04.291) 0:01:57.027 ************ 2025-05-19 14:40:30.095446 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095478 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095496 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095508 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095519 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095592 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:40:30.095603 | orchestrator | 2025-05-19 14:40:30.095614 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-19 14:40:30.095625 | orchestrator | Monday 19 May 2025 14:40:03 +0000 (0:00:02.974) 0:02:00.001 ************ 2025-05-19 14:40:30.095636 | orchestrator | 2025-05-19 14:40:30.095646 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-19 14:40:30.095657 | orchestrator | Monday 19 May 2025 14:40:03 +0000 (0:00:00.073) 0:02:00.074 ************ 2025-05-19 14:40:30.095667 | orchestrator | 2025-05-19 14:40:30.095678 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-19 14:40:30.095695 | orchestrator | Monday 19 May 2025 14:40:03 +0000 (0:00:00.068) 0:02:00.143 ************ 2025-05-19 14:40:30.095706 | orchestrator | 2025-05-19 14:40:30.095717 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-19 14:40:30.095727 | orchestrator | Monday 19 May 2025 14:40:03 +0000 (0:00:00.078) 0:02:00.222 ************ 2025-05-19 14:40:30.095738 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:40:30.095749 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:40:30.095760 | orchestrator | 2025-05-19 14:40:30.095776 | orchestrator | RUNNING HANDLER [ovn-db : Restart 
ovn-sb-db container] ************************* 2025-05-19 14:40:30.095788 | orchestrator | Monday 19 May 2025 14:40:09 +0000 (0:00:06.206) 0:02:06.429 ************ 2025-05-19 14:40:30.095798 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:40:30.095809 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:40:30.095820 | orchestrator | 2025-05-19 14:40:30.095830 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-19 14:40:30.095841 | orchestrator | Monday 19 May 2025 14:40:15 +0000 (0:00:06.135) 0:02:12.565 ************ 2025-05-19 14:40:30.095852 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:40:30.095862 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:40:30.095873 | orchestrator | 2025-05-19 14:40:30.095884 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-19 14:40:30.095894 | orchestrator | Monday 19 May 2025 14:40:21 +0000 (0:00:06.145) 0:02:18.710 ************ 2025-05-19 14:40:30.095905 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:40:30.095916 | orchestrator | 2025-05-19 14:40:30.095926 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-19 14:40:30.095937 | orchestrator | Monday 19 May 2025 14:40:21 +0000 (0:00:00.122) 0:02:18.832 ************ 2025-05-19 14:40:30.095948 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.095958 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.095969 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.095980 | orchestrator | 2025-05-19 14:40:30.095990 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-19 14:40:30.096001 | orchestrator | Monday 19 May 2025 14:40:22 +0000 (0:00:01.027) 0:02:19.860 ************ 2025-05-19 14:40:30.096012 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.096022 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.096033 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:40:30.096043 | orchestrator | 2025-05-19 14:40:30.096054 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-19 14:40:30.096065 | orchestrator | Monday 19 May 2025 14:40:23 +0000 (0:00:00.616) 0:02:20.476 ************ 2025-05-19 14:40:30.096076 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.096086 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.096097 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.096107 | orchestrator | 2025-05-19 14:40:30.096118 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-19 14:40:30.096129 | orchestrator | Monday 19 May 2025 14:40:24 +0000 (0:00:00.746) 0:02:21.223 ************ 2025-05-19 14:40:30.096139 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:40:30.096150 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:40:30.096160 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:40:30.096171 | orchestrator | 2025-05-19 14:40:30.096181 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-19 14:40:30.096192 | orchestrator | Monday 19 May 2025 14:40:24 +0000 (0:00:00.596) 0:02:21.819 ************ 2025-05-19 14:40:30.096203 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.096213 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.096224 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.096234 | 
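For context on the leader-only pattern above (the "Configure OVN NB/SB connection settings" tasks report changed only on testbed-node-0 and skipping on the other two): kolla-ansible first queries each node for the Raft leader of the OVN_Northbound/OVN_Southbound clusters and applies the listener configuration on the leader alone. A rough, hypothetical sketch of that pattern using stock OVN tooling — the container name, control-socket path, and port are assumptions for illustration, not values taken from this log:

```python
# Hypothetical sketch of the leader-only configuration step seen above: ask each
# node for its Raft role and run set-connection only where the role is "leader",
# so follower nodes end up as "skipping". Paths/ports are assumed defaults.
import subprocess

def raft_role(container: str, ctl_path: str, db: str) -> str:
    """Return the Raft role ('leader' or 'follower') from cluster/status."""
    out = subprocess.run(
        ["docker", "exec", container, "ovs-appctl", "-t", ctl_path,
         "cluster/status", db],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("Role:"):
            return line.split(":", 1)[1].strip()
    raise RuntimeError(f"no Role line in cluster/status output for {db}")

def configure_nb_listener(container: str = "ovn_nb_db") -> None:
    # Followers return early, matching the skipping results in the log above;
    # the SB database is handled the same way on port 6642.
    if raft_role(container, "/var/run/ovn/ovnnb_db.ctl", "OVN_Northbound") != "leader":
        return
    subprocess.run(
        ["docker", "exec", container, "ovn-nbctl", "set-connection",
         "ptcp:6641:0.0.0.0"],
        check=True,
    )
```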
orchestrator | 2025-05-19 14:40:30.096245 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-19 14:40:30.096256 | orchestrator | Monday 19 May 2025 14:40:25 +0000 (0:00:01.049) 0:02:22.869 ************ 2025-05-19 14:40:30.096273 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:40:30.096283 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:40:30.096294 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:40:30.096304 | orchestrator | 2025-05-19 14:40:30.096320 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:40:30.096332 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-19 14:40:30.096343 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-19 14:40:30.096354 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-19 14:40:30.096365 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:40:30.096376 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:40:30.096387 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:40:30.096397 | orchestrator | 2025-05-19 14:40:30.096408 | orchestrator | 2025-05-19 14:40:30.096419 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:40:30.096429 | orchestrator | Monday 19 May 2025 14:40:26 +0000 (0:00:00.919) 0:02:23.788 ************ 2025-05-19 14:40:30.096440 | orchestrator | =============================================================================== 2025-05-19 14:40:30.096451 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 37.69s 2025-05-19 14:40:30.096461 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.44s 2025-05-19 14:40:30.096472 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.69s 2025-05-19 14:40:30.096483 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 12.76s 2025-05-19 14:40:30.096493 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.65s 2025-05-19 14:40:30.096504 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.41s 2025-05-19 14:40:30.096515 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.29s 2025-05-19 14:40:30.096530 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.17s 2025-05-19 14:40:30.096542 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.97s 2025-05-19 14:40:30.096578 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.04s 2025-05-19 14:40:30.096598 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.77s 2025-05-19 14:40:30.096617 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.69s 2025-05-19 14:40:30.096635 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.33s 2025-05-19 14:40:30.096649 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 1.33s 2025-05-19 14:40:30.096660 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.32s 2025-05-19 14:40:30.096670 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.31s 2025-05-19 14:40:30.096681 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.30s 2025-05-19 14:40:30.096691 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.16s 2025-05-19 14:40:30.096702 | orchestrator | ovn-db : Configure OVN NB connection settings --------------------------- 1.12s 2025-05-19 14:40:30.096713 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.07s 2025-05-19 14:40:30.096724 | orchestrator | 2025-05-19 14:40:30 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state STARTED 2025-05-19 14:40:30.096746 | orchestrator | 2025-05-19 14:40:30 | INFO  | Wait 1 second(s) until the next check
[... identical status polls repeated every ~3 seconds until 14:42:59: tasks da24b84a-bb0c-4b01-87b3-542158d3c936 and 05a6690b-c40a-4e02-b5ac-a4b4c590a84c remained in state STARTED throughout; task 86a10581-6ffa-4b28-9e1c-6865c4f2f73c joined the poll at 14:41:52 and reached state SUCCESS at 14:42:07 ...]
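The collapsed run above is a client-side wait loop: every few seconds each outstanding task ID is queried and its state re-logged until it leaves STARTED. A minimal sketch of such a poll-until-terminal loop, with get_state() as a hypothetical stand-in for however the CLI actually looks up task status:

```python
# Minimal sketch of the wait loop collapsed above: poll a set of task IDs until
# each reaches a terminal state, logging between checks. get_state() is a
# hypothetical stand-in for the real status lookup.
import time
import logging

logging.basicConfig(format="%(asctime)s | %(levelname)s  | %(message)s",
                    level=logging.INFO)
log = logging.getLogger(__name__)

TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval: float = 1.0) -> dict:
    """Block until every task ID reports a terminal state; return final states."""
    pending = set(task_ids)
    final = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            log.info("Task %s is in state %s", task_id, state)
            if state in TERMINAL:
                final[task_id] = state
        pending -= set(final)
        if pending:
            log.info("Wait %d second(s) until the next check", int(interval))
            time.sleep(interval)
    return final
```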
2025-05-19 14:42:59.542295 | orchestrator | 2025-05-19 14:42:59 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:42:59.554682 | orchestrator | 2025-05-19 14:42:59.554741 | orchestrator | None 2025-05-19 14:42:59.554748 | orchestrator | 2025-05-19 14:42:59.554755 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 14:42:59.554762 | orchestrator | 2025-05-19 14:42:59.554768 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:42:59.554775 | orchestrator | Monday 19 May 2025 14:37:01 +0000 (0:00:00.389) 0:00:00.389 ************ 2025-05-19 14:42:59.554781 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:42:59.554787 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:42:59.554793 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:42:59.554799 | orchestrator | 2025-05-19 14:42:59.554805 | orchestrator | TASK [Group hosts based on enabled services]
*********************************** 2025-05-19 14:42:59.554816 | orchestrator | Monday 19 May 2025 14:37:02 +0000 (0:00:00.442) 0:00:00.832 ************ 2025-05-19 14:42:59.554822 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-19 14:42:59.554828 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-19 14:42:59.554834 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-05-19 14:42:59.554839 | orchestrator | 2025-05-19 14:42:59.554845 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-05-19 14:42:59.554850 | orchestrator | 2025-05-19 14:42:59.554856 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-19 14:42:59.554861 | orchestrator | Monday 19 May 2025 14:37:03 +0000 (0:00:00.884) 0:00:01.720 ************ 2025-05-19 14:42:59.554867 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.554873 | orchestrator | 2025-05-19 14:42:59.554879 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-05-19 14:42:59.554884 | orchestrator | Monday 19 May 2025 14:37:03 +0000 (0:00:00.720) 0:00:02.441 ************ 2025-05-19 14:42:59.554890 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:42:59.554895 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:42:59.554901 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:42:59.554906 | orchestrator | 2025-05-19 14:42:59.554912 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-19 14:42:59.554918 | orchestrator | Monday 19 May 2025 14:37:04 +0000 (0:00:01.159) 0:00:03.601 ************ 2025-05-19 14:42:59.554923 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.554961 | orchestrator | 2025-05-19 14:42:59.554967 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-19 14:42:59.554973 | orchestrator | Monday 19 May 2025 14:37:06 +0000 (0:00:01.271) 0:00:04.872 ************ 2025-05-19 14:42:59.554978 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:42:59.554984 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:42:59.554989 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:42:59.555097 | orchestrator | 2025-05-19 14:42:59.555105 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-19 14:42:59.555126 | orchestrator | Monday 19 May 2025 14:37:07 +0000 (0:00:01.037) 0:00:05.910 ************ 2025-05-19 14:42:59.555132 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-19 14:42:59.555138 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-19 14:42:59.555143 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-19 14:42:59.555149 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-19 14:42:59.555154 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-19 14:42:59.555160 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-19 14:42:59.555165 | orchestrator | ok: [testbed-node-1] => 
(item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-19 14:42:59.555172 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-19 14:42:59.555177 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-19 14:42:59.555183 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-19 14:42:59.555188 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-19 14:42:59.555194 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-19 14:42:59.555217 | orchestrator | 2025-05-19 14:42:59.555222 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-19 14:42:59.555228 | orchestrator | Monday 19 May 2025 14:37:09 +0000 (0:00:02.554) 0:00:08.465 ************ 2025-05-19 14:42:59.555234 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-19 14:42:59.555240 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-19 14:42:59.555245 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-19 14:42:59.555251 | orchestrator | 2025-05-19 14:42:59.555256 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-19 14:42:59.555262 | orchestrator | Monday 19 May 2025 14:37:10 +0000 (0:00:00.818) 0:00:09.283 ************ 2025-05-19 14:42:59.555268 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-19 14:42:59.555274 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-19 14:42:59.555281 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-19 14:42:59.555287 | orchestrator | 2025-05-19 14:42:59.555294 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-19 14:42:59.555300 | orchestrator | Monday 19 May 2025 14:37:12 +0000 (0:00:01.395) 0:00:10.678 ************ 2025-05-19 14:42:59.555307 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-19 14:42:59.555313 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.555332 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-19 14:42:59.555339 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.555345 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-19 14:42:59.555351 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.555357 | orchestrator | 2025-05-19 14:42:59.555364 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-19 14:42:59.555370 | orchestrator | Monday 19 May 2025 14:37:13 +0000 (0:00:01.176) 0:00:11.855 ************ 2025-05-19 14:42:59.555383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-19 14:42:59.555400 | 
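In the sysctl results above, net.ipv4.tcp_retries2 stays ok while the other keys report changed; the value KOLLA_UNSET evidently acts as a sentinel for "leave this key unmanaged". A small sketch of that skip semantics — the real role goes through Ansible's sysctl module and persists values, so writing /proc/sys directly here is only illustrative:

```python
# Sketch of the behaviour visible above: apply each sysctl value, but treat the
# sentinel 'KOLLA_UNSET' as "do not manage this key" (hence the ok/unchanged
# result for net.ipv4.tcp_retries2). Requires root; illustrative only.
from pathlib import Path

SYSCTL_VALUES = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]

def apply_sysctl(items) -> None:
    for item in items:
        if item["value"] == "KOLLA_UNSET":
            continue  # unmanaged: keep whatever the kernel/distro configured
        key_path = Path("/proc/sys") / item["name"].replace(".", "/")
        key_path.write_text(f"{item['value']}\n")
```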
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-19 14:42:59.555406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-19 14:42:59.555412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 14:42:59.555426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 14:42:59.555435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 14:42:59.555445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 14:42:59.555452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 14:42:59.555466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 14:42:59.555471 | orchestrator | 2025-05-19 14:42:59.555477 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-19 14:42:59.555482 | orchestrator | Monday 19 May 2025 14:37:15 +0000 (0:00:02.002) 0:00:13.857 ************ 2025-05-19 14:42:59.555488 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.555493 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.555499 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.555504 | orchestrator | 2025-05-19 14:42:59.555510 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-19 14:42:59.555515 | orchestrator | Monday 19 May 2025 14:37:16 +0000 (0:00:01.000) 0:00:14.857 ************ 2025-05-19 14:42:59.555521 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-19 14:42:59.555526 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-19 14:42:59.555531 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-19 14:42:59.555536 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-19 14:42:59.555542 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-19 14:42:59.555547 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-19 14:42:59.555552 | orchestrator | 2025-05-19 14:42:59.555558 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-19 14:42:59.555563 | orchestrator | Monday 19 May 2025 14:37:18 +0000 (0:00:01.802) 0:00:16.660 ************ 2025-05-19 14:42:59.555569 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.555574 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.555579 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.555584 | orchestrator | 2025-05-19 14:42:59.555590 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-19 14:42:59.555595 | orchestrator | Monday 19 May 2025 14:37:19 +0000 (0:00:00.977) 0:00:17.638 ************ 2025-05-19 14:42:59.555601 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:42:59.555607 | 
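The check-handling tasks that follow iterate the same service dict and branch per entry: a plausible reading of the skip/changed results is that a check script is copied only for services that are both enabled and define a healthcheck, which would explain why keepalived (no healthcheck key) and haproxy-ssh (enabled: False) only ever appear as skipping. A hypothetical sketch of that filter:

```python
# Hypothetical reconstruction of the per-service filter behind the
# "Removing checks ..." / "Copying checks ..." tasks below; the real conditions
# live in the kolla-ansible loadbalancer role. Data abridged from this log.
services = {
    "haproxy":     {"enabled": True,
                    "healthcheck": {"test": ["CMD-SHELL",
                                             "healthcheck_curl http://192.168.16.10:61313"]}},
    "proxysql":    {"enabled": True,
                    "healthcheck": {"test": ["CMD-SHELL",
                                             "healthcheck_listen proxysql 6032"]}},
    "keepalived":  {"enabled": True},    # no healthcheck -> check never copied
    "haproxy-ssh": {"enabled": False},   # disabled -> always skipped
}

def wants_check(svc: dict) -> bool:
    """True when a service should get its healthcheck script copied."""
    return bool(svc.get("enabled")) and "healthcheck" in svc

to_copy = [name for name, svc in services.items() if wants_check(svc)]
# -> ['haproxy', 'proxysql'], matching the changed/skipping pattern in the log
```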
orchestrator | ok: [testbed-node-2] 2025-05-19 14:42:59.555612 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:42:59.555617 | orchestrator | 2025-05-19 14:42:59.555623 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-19 14:42:59.555628 | orchestrator | Monday 19 May 2025 14:37:20 +0000 (0:00:01.292) 0:00:18.930 ************ 2025-05-19 14:42:59.555634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-19 14:42:59.555644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 14:42:59.555656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 14:42:59.555662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-19 14:42:59.555668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 14:42:59.555673 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.555678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 14:42:59.555692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 14:42:59.555697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 14:42:59.555706 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.555717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-19 14:42:59.555723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 
14:42:59.555728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.555733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-19 14:42:59.555744 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.555749 | orchestrator |
2025-05-19 14:42:59.555754 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-05-19 14:42:59.555759 | orchestrator | Monday 19 May 2025 14:37:21 +0000 (0:00:01.346) 0:00:20.277 ************
2025-05-19 14:42:59.555764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.555769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.555783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.555809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.555815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.555820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-19 14:42:59.555825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.555830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.555840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-19 14:42:59.555851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.555856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.555861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea', '__omit_place_holder__a2629033ab060c732b671ba3ed4aa49672c892ea'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-19 14:42:59.555866 | orchestrator |
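All of the loop items above come from a single service map that the loadbalancer role iterates over. Reconstructed as YAML from the item dicts in this log it reads roughly as follows; this is a sketch, not the role's verbatim defaults, and the variable name loadbalancer_services is an assumption. keepalived is skipped by this task because it defines no healthcheck to copy, and haproxy-ssh is skipped everywhere because it is disabled.

loadbalancer_services:   # assumed variable name; values taken from the log items
  haproxy:
    container_name: haproxy
    group: loadbalancer
    enabled: true
    image: registry.osism.tech/kolla/haproxy:2024.2
    privileged: true
    volumes:
      - /etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro
      - haproxy_socket:/var/lib/kolla/haproxy/
      - letsencrypt_certificates:/etc/haproxy/certificates
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]  # per-node API address
      timeout: "30"
  keepalived:
    container_name: keepalived
    group: loadbalancer
    enabled: true
    image: registry.osism.tech/kolla/keepalived:2024.2
    privileged: true   # needs raw network access for VRRP; note: no healthcheck key
  haproxy-ssh:
    container_name: haproxy_ssh
    enabled: false     # disabled, hence "skipping" on every node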
2025-05-19 14:42:59.555871 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-05-19 14:42:59.555876 | orchestrator | Monday 19 May 2025 14:37:24 +0000 (0:00:03.181) 0:00:23.458 ************
2025-05-19 14:42:59.555881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.555886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.555896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.555905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.555913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.555918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.555923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.555928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.555937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.555942 | orchestrator |
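Each changed item here writes a config.json into the service's config directory; kolla's container entrypoint reads it at startup to copy config files into place and exec the service process. A minimal sketch of such a copy task, assuming kolla-ansible's usual node_config_directory layout (illustrative, not the verbatim role task):

- name: Copying over config.json files for services   # sketch, assumed task shape
  become: true
  template:
    src: "{{ item.key }}.json.j2"
    dest: "{{ node_config_directory }}/{{ item.key }}/config.json"
    mode: "0660"
  when: item.value.enabled | bool
  with_dict: "{{ loadbalancer_services }}"   # assumed variable name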
2025-05-19 14:42:59.555947 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-05-19 14:42:59.555951 | orchestrator | Monday 19 May 2025 14:37:27 +0000 (0:00:03.112) 0:00:26.571 ************
2025-05-19 14:42:59.555956 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-19 14:42:59.555962 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-19 14:42:59.555966 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-19 14:42:59.555971 | orchestrator |
2025-05-19 14:42:59.555976 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-05-19 14:42:59.555981 | orchestrator | Monday 19 May 2025 14:37:30 +0000 (0:00:03.026) 0:00:29.597 ************
2025-05-19 14:42:59.555985 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-19 14:42:59.555990 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-19 14:42:59.556321 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-19 14:42:59.556335 | orchestrator |
2025-05-19 14:42:59.556340 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-05-19 14:42:59.556345 | orchestrator | Monday 19 May 2025 14:37:35 +0000 (0:00:04.068) 0:00:33.666 ************
2025-05-19 14:42:59.556349 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.556354 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.556359 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.556364 | orchestrator |
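haproxy_main.cfg.j2 and proxysql.yaml.j2 are rendered from the role's template directory, and the single-external-frontend variant is skipped because that option is not enabled in this testbed. A sketch of such a rendering task, assuming kolla-ansible's usual override search path (node_custom_config as the operator override root; exact paths are illustrative):

- name: Copying over haproxy.cfg   # sketch, assumed task shape
  become: true
  template:
    src: "{{ item }}"
    dest: "{{ node_config_directory }}/haproxy/haproxy.cfg"
    mode: "0660"
  with_first_found:
    - "{{ node_custom_config }}/haproxy/haproxy_main.cfg"   # operator override, if present
    - haproxy/haproxy_main.cfg.j2                           # role default template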
2025-05-19 14:42:59.556369 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-05-19 14:42:59.556374 | orchestrator | Monday 19 May 2025 14:37:35 +0000 (0:00:00.593) 0:00:34.259 ************
2025-05-19 14:42:59.556382 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-19 14:42:59.556388 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-19 14:42:59.556393 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-19 14:42:59.556398 | orchestrator |
2025-05-19 14:42:59.556403 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-05-19 14:42:59.556408 | orchestrator | Monday 19 May 2025 14:37:38 +0000 (0:00:02.993) 0:00:37.253 ************
2025-05-19 14:42:59.556412 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-19 14:42:59.556417 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-19 14:42:59.556422 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-19 14:42:59.556427 | orchestrator |
2025-05-19 14:42:59.556432 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-05-19 14:42:59.556436 | orchestrator | Monday 19 May 2025 14:37:40 +0000 (0:00:02.219) 0:00:39.473 ************
2025-05-19 14:42:59.556441 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-05-19 14:42:59.556453 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-05-19 14:42:59.556458 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-05-19 14:42:59.556462 | orchestrator |
2025-05-19 14:42:59.556467 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-05-19 14:42:59.556472 | orchestrator | Monday 19 May 2025 14:37:42 +0000 (0:00:01.790) 0:00:41.263 ************
2025-05-19 14:42:59.556477 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-05-19 14:42:59.556482 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-05-19 14:42:59.556486 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-05-19 14:42:59.556491 | orchestrator |
2025-05-19 14:42:59.556496 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-19 14:42:59.556501 | orchestrator | Monday 19 May 2025 14:37:44 +0000 (0:00:01.584) 0:00:42.848 ************
2025-05-19 14:42:59.556505 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:42:59.556510 | orchestrator |
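The custom services configuration is pulled from the testbed's own configuration repository under /opt/configuration rather than from the role, haproxy.pem and haproxy-internal.pem carry the external and internal frontend certificates, and copy-certs.yml hands off to the shared service-cert-copy role, which is why the following tasks are prefixed service-cert-copy instead of loadbalancer. A sketch of what that include might look like (assumed, modeled on kolla-ansible's pattern):

# /ansible/roles/loadbalancer/tasks/copy-certs.yml (sketch, assumed content)
- name: Copy certificates and keys for the loadbalancer services
  import_role:
    name: service-cert-copy
  vars:
    project_services: "{{ loadbalancer_services }}"   # assumed variable name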
2025-05-19 14:42:59.556515 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-05-19 14:42:59.556520 | orchestrator | Monday 19 May 2025 14:37:45 +0000 (0:00:01.020) 0:00:43.868 ************
2025-05-19 14:42:59.556525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556585 | orchestrator |
2025-05-19 14:42:59.556590 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-05-19 14:42:59.556595 | orchestrator | Monday 19 May 2025 14:37:48 +0000 (0:00:03.241) 0:00:47.110 ************
2025-05-19 14:42:59.556604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556627 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.556632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556647 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.556684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556742 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.556747 | orchestrator |
2025-05-19 14:42:59.556752 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-05-19 14:42:59.556757 | orchestrator | Monday 19 May 2025 14:37:49 +0000 (0:00:00.596) 0:00:47.706 ************
2025-05-19 14:42:59.556762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556777 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.556782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556808 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.556813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556828 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.556833 | orchestrator |
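The extra-CA copy changed all three nodes, but both backend internal TLS tasks skip every item: backend TLS is not enabled in this deployment, so the guard condition evaluates false. In kolla-ansible that guard is the kolla_enable_tls_backend flag; a sketch of such a guarded task (file names illustrative):

- name: Copying over backend internal TLS certificate   # sketch, assumed task shape
  become: true
  copy:
    src: "{{ kolla_tls_backend_cert }}"
    dest: "{{ node_config_directory }}/{{ item.key }}/{{ item.key }}-cert.pem"
    mode: "0660"
  when: kolla_enable_tls_backend | bool   # false here, hence the skips
  with_dict: "{{ project_services }}"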
2025-05-19 14:42:59.556838 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-05-19 14:42:59.556878 | orchestrator | Monday 19 May 2025 14:37:50 +0000 (0:00:01.196) 0:00:48.903 ************
2025-05-19 14:42:59.556884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556909 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.556914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556929 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.556934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.556947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.556957 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.556968 | orchestrator |
2025-05-19 14:42:59.556974 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-05-19 14:42:59.556979 | orchestrator | Monday 19 May 2025 14:37:51 +0000 (0:00:01.020) 0:00:49.924 ************
2025-05-19 14:42:59.556988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.556994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557005 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.557011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557046 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.557055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557073 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.557078 | orchestrator |
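From here the same role repeats for mariadb and then proxysql: the loadbalancer play invokes service-cert-copy once per fronted project, so only the task prefix changes while the loop items remain the loadbalancer services, and with backend TLS disabled each pass is a no-op. Roughly (assumed wiring, not the verbatim play):

- name: Copy certificates for services behind the load balancer   # sketch
  include_role:
    name: service-cert-copy
  vars:
    project_name: "{{ item }}"
    project_services: "{{ loadbalancer_services }}"   # assumed variable name
  loop: ["mariadb", "proxysql"]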
2025-05-19 14:42:59.557084 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-05-19 14:42:59.557089 | orchestrator | Monday 19 May 2025 14:37:52 +0000 (0:00:01.660) 0:00:51.584 ************
2025-05-19 14:42:59.557095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557158 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.557169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557189 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.557195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557215 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.557221 | orchestrator |
2025-05-19 14:42:59.557226 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-05-19 14:42:59.557232 | orchestrator | Monday 19 May 2025 14:37:54 +0000 (0:00:01.046) 0:00:52.630 ************
2025-05-19 14:42:59.557238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557261 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.557267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557287 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.557308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557330 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.557335 | orchestrator |
2025-05-19 14:42:59.557340 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-05-19 14:42:59.557345 | orchestrator | Monday 19 May 2025 14:37:54 +0000 (0:00:00.486) 0:00:53.117 ************
2025-05-19 14:42:59.557350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557368 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.557387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557409 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.557433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557454 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.557475 | orchestrator |
2025-05-19 14:42:59.557480 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-05-19 14:42:59.557485 | orchestrator | Monday 19 May 2025 14:37:54 +0000 (0:00:00.409) 0:00:53.527 ************
2025-05-19 14:42:59.557490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557505 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.557517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 14:42:59.557554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 14:42:59.557563 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.557568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 14:42:59.557573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 14:42:59.557578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 14:42:59.557583 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.557588 | orchestrator | 2025-05-19 14:42:59.557593 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-19 14:42:59.557597 | orchestrator | Monday 19 May 2025 14:37:55 +0000 (0:00:01.052) 0:00:54.579 ************ 2025-05-19 14:42:59.557602 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-19 14:42:59.557607 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-19 14:42:59.557615 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-19 14:42:59.557620 | orchestrator | 2025-05-19 14:42:59.557624 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-19 14:42:59.557629 | orchestrator | Monday 19 May 2025 14:37:57 +0000 (0:00:01.358) 0:00:55.937 ************ 2025-05-19 14:42:59.557634 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-19 14:42:59.557639 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-19 14:42:59.557646 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-19 14:42:59.557651 | orchestrator | 2025-05-19 14:42:59.557655 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-19 14:42:59.557660 | orchestrator | Monday 19 May 2025 14:37:58 +0000 (0:00:01.284) 0:00:57.222 ************ 2025-05-19 14:42:59.557665 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 14:42:59.557670 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 14:42:59.557674 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 14:42:59.557682 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 14:42:59.557687 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.557706 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 14:42:59.557711 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.557716 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 14:42:59.557721 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.557726 | orchestrator | 2025-05-19 14:42:59.557731 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-19 14:42:59.557735 | orchestrator | Monday 19 May 2025 14:37:59 +0000 (0:00:01.067) 0:00:58.289 ************ 2025-05-19 14:42:59.557740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-19 14:42:59.557746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-19 14:42:59.557751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-19 14:42:59.557759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 14:42:59.557769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 14:42:59.557778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 14:42:59.557783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 14:42:59.557788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 14:42:59.557793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 14:42:59.557798 | orchestrator | 2025-05-19 14:42:59.557803 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-19 14:42:59.557807 | orchestrator | Monday 19 May 2025 14:38:02 +0000 (0:00:02.554) 0:01:00.844 ************ 2025-05-19 14:42:59.557812 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.557817 | orchestrator | 2025-05-19 14:42:59.557822 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-19 14:42:59.557826 | orchestrator | Monday 19 May 2025 14:38:02 +0000 (0:00:00.540) 0:01:01.385 ************ 2025-05-19 14:42:59.557835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-19 14:42:59.557844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.557852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.557857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.557862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-19 14:42:59 | INFO  | Task 05a6690b-c40a-4e02-b5ac-a4b4c590a84c is in state SUCCESS
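[Editor's note] The changed/skipping pattern in the loop above is driven by the shape of each item: only service entries that carry a 'haproxy' mapping (aodh-api here) get haproxy frontend/backend configuration rendered, while the worker-style services (aodh-evaluator, aodh-listener, aodh-notifier) expose no listeners and are skipped. A minimal Python sketch of that selection logic, using the dict shape copied from the log; the function name and the trimmed-down data are illustrative assumptions, not kolla-ansible's actual implementation:

    # Selection sketch: a service produces haproxy config only when it is
    # enabled and exposes one or more haproxy listeners. Dict shape taken
    # from the loop items above; all names here are hypothetical.
    services = {
        "aodh-api": {
            "enabled": True,
            "haproxy": {
                "aodh_api": {"enabled": "yes", "mode": "http", "external": False,
                             "port": "8042", "listen_port": "8042"},
                "aodh_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                      "external_fqdn": "api.testbed.osism.xyz",
                                      "port": "8042", "listen_port": "8042"},
            },
        },
        # Worker services carry no 'haproxy' key and are skipped.
        "aodh-evaluator": {"enabled": True},
        "aodh-listener": {"enabled": True},
        "aodh-notifier": {"enabled": True},
    }

    def needs_haproxy_config(service: dict) -> bool:
        return bool(service.get("enabled")) and bool(service.get("haproxy"))

    for name, service in services.items():
        print(("changed" if needs_haproxy_config(service) else "skipping"), name)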
2025-05-19 14:42:59.558004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.558009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-19 14:42:59.558164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.558174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558184 | orchestrator | 2025-05-19 14:42:59.558189 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-19 14:42:59.558194 | orchestrator | Monday 19 May 2025 14:38:06 +0000 (0:00:03.679) 0:01:05.064 ************ 2025-05-19 14:42:59.558199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-19 14:42:59.558260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.558269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 
'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558279 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.558289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-19 14:42:59.558294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.558300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558314 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.558322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-19 14:42:59.558327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.558348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558361 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.558366 | orchestrator | 2025-05-19 14:42:59.558371 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-19 14:42:59.558376 | orchestrator | Monday 19 May 2025 14:38:07 +0000 (0:00:00.790) 0:01:05.855 ************ 2025-05-19 14:42:59.558381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-19 14:42:59.558387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-19 14:42:59.558397 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.558401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-19 14:42:59.558407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-19 14:42:59.558411 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.558416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-19 14:42:59.558421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-19 14:42:59.558426 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.558430 | orchestrator | 2025-05-19 14:42:59.558435 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-19 14:42:59.558440 | orchestrator | Monday 19 May 2025 14:38:08 +0000 (0:00:00.934) 0:01:06.789 ************ 2025-05-19 14:42:59.558445 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.558449 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.558454 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.558459 | orchestrator | 2025-05-19 14:42:59.558464 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-19 14:42:59.558471 | orchestrator | Monday 19 May 2025 14:38:09 +0000 (0:00:01.161) 0:01:07.950 ************ 2025-05-19 14:42:59.558476 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.558481 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.558485 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.558490 | orchestrator | 2025-05-19 14:42:59.558495 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-19 14:42:59.558500 | orchestrator | Monday 19 May 2025 14:38:11 +0000 (0:00:01.736) 0:01:09.687 ************ 2025-05-19 14:42:59.558504 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.558509 | orchestrator | 2025-05-19 14:42:59.558514 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-19 14:42:59.558518 | orchestrator | Monday 19 May 2025 14:38:11 +0000 (0:00:00.667) 0:01:10.355 ************ 2025-05-19 14:42:59.558524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}}}}) 2025-05-19 14:42:59.558534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.558556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}})  2025-05-19 14:42:59.558566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.558574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558588 | orchestrator | 2025-05-19 14:42:59.558593 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-19 14:42:59.558597 | orchestrator | Monday 19 May 2025 14:38:16 +0000 (0:00:04.598) 0:01:14.953 ************ 2025-05-19 14:42:59.558607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  
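[Editor's note] Every container definition in these items repeats the same healthcheck shape: interval, retries, start_period and timeout as strings (the values read as seconds), plus a CMD-SHELL test such as healthcheck_curl or healthcheck_port. As an illustration of what that dict amounts to, here is a sketch that renders it into docker run --health-* flags; the seconds interpretation and the rendering itself are assumptions for illustration, not taken from kolla-ansible:

    # Sketch: render the healthcheck dict from the log into `docker run`
    # --health-* flags. Assumes the bare numbers are seconds (an assumption;
    # kolla may interpret them differently). Shape copied from barbican-api.
    def healthcheck_flags(hc: dict) -> list[str]:
        # For CMD-SHELL tests, everything after the marker is the shell command.
        cmd = " ".join(hc["test"][1:] if hc["test"][0] == "CMD-SHELL" else hc["test"])
        return [
            f"--health-cmd={cmd}",
            f"--health-interval={hc['interval']}s",
            f"--health-retries={hc['retries']}",
            f"--health-start-period={hc['start_period']}s",
            f"--health-timeout={hc['timeout']}s",
        ]

    hc = {"interval": "30", "retries": "3", "start_period": "5",
          "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
          "timeout": "30"}
    print(" ".join(healthcheck_flags(hc)))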
2025-05-19 14:42:59.558613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558623 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.558631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.558640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.558645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558658 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.558663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.558676 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.558681 | orchestrator | 2025-05-19 14:42:59.558686 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-19 14:42:59.558690 | orchestrator | Monday 19 May 2025 14:38:17 +0000 (0:00:01.005) 0:01:15.959 ************ 2025-05-19 14:42:59.558698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 14:42:59.558704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 14:42:59.558709 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.558714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 14:42:59.558719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 14:42:59.558724 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.558729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 14:42:59.558734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 14:42:59.558739 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.558744 | orchestrator | 2025-05-19 14:42:59.558748 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-19 14:42:59.558753 | orchestrator | Monday 19 May 2025 14:38:18 +0000 (0:00:00.793) 0:01:16.752 ************ 2025-05-19 14:42:59.558758 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.558763 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.558767 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.558772 | orchestrator | 2025-05-19 14:42:59.558777 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-19 14:42:59.558782 | orchestrator | Monday 19 May 2025 14:38:19 +0000 (0:00:01.692) 0:01:18.445 ************ 2025-05-19 14:42:59.558787 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.558791 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.558796 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.558801 | orchestrator | 2025-05-19 14:42:59.558821 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-19 14:42:59.558827 | orchestrator | Monday 19 May 2025 14:38:21 +0000 (0:00:01.955) 0:01:20.401 ************ 2025-05-19 14:42:59.558832 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.558836 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.558841 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.558846 | orchestrator | 2025-05-19 14:42:59.558853 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-19 14:42:59.558858 | orchestrator | Monday 19 May 2025 14:38:22 +0000 (0:00:00.303) 0:01:20.704 ************ 2025-05-19 14:42:59.558863 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.558868 | orchestrator | 2025-05-19 14:42:59.558872 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-19 14:42:59.558910 | orchestrator | Monday 19 May 2025 14:38:22 +0000 (0:00:00.699) 0:01:21.404 ************ 2025-05-19 14:42:59.558920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-19 14:42:59.559759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-19 14:42:59.559782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-19 14:42:59.559788 | orchestrator | 2025-05-19 14:42:59.559793 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-19 14:42:59.559798 | orchestrator | Monday 19 May 2025 14:38:25 +0000 (0:00:03.094) 0:01:24.498 ************ 2025-05-19 14:42:59.559803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-19 14:42:59.559808 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.559817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': 
{'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-19 14:42:59.559829 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.559834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-19 14:42:59.559838 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.559843 | orchestrator | 2025-05-19 14:42:59.559847 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-19 14:42:59.559852 | orchestrator | Monday 19 May 2025 14:38:27 +0000 (0:00:02.116) 0:01:26.615 ************ 2025-05-19 14:42:59.559862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 14:42:59.559868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 14:42:59.559873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 14:42:59.559879 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.559884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 14:42:59.559888 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.559893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 14:42:59.559903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 14:42:59.559908 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.559912 | orchestrator | 2025-05-19 14:42:59.559917 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-19 14:42:59.559921 | orchestrator | Monday 19 May 2025 14:38:29 +0000 (0:00:01.932) 0:01:28.547 ************ 2025-05-19 14:42:59.559926 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.559930 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.559935 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.559939 | orchestrator | 2025-05-19 14:42:59.559944 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-19 14:42:59.559948 | orchestrator | Monday 19 May 2025 14:38:30 +0000 (0:00:00.742) 0:01:29.290 ************ 2025-05-19 14:42:59.559952 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.559957 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.559961 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.559966 | orchestrator | 2025-05-19 14:42:59.559970 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-19 14:42:59.559975 | orchestrator | Monday 19 May 2025 14:38:31 +0000 (0:00:00.894) 0:01:30.185 ************ 2025-05-19 14:42:59.559979 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.559984 | orchestrator | 2025-05-19 14:42:59.559988 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-19 14:42:59.559993 | orchestrator | Monday 19 May 2025 14:38:32 +0000 (0:00:00.795) 0:01:30.981 ************ 2025-05-19 14:42:59.560001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.560007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.560047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.560111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560129 | orchestrator | 2025-05-19 14:42:59.560134 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-19 14:42:59.560138 | orchestrator | Monday 19 May 2025 14:38:36 +0000 (0:00:03.675) 0:01:34.656 ************ 2025-05-19 14:42:59.560143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.560151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560167 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.560176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.560181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560189 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560214 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.560219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.560224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 14:42:59.560239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 14:42:59.560250 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.560254 | orchestrator |
2025-05-19 14:42:59.560259 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-05-19 14:42:59.560264 | orchestrator | Monday 19 May 2025 14:38:37 +0000 (0:00:01.376) 0:01:36.033 ************
2025-05-19 14:42:59.560269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 14:42:59.560274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 14:42:59.560279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 14:42:59.560286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 14:42:59.560291 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.560296 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.560300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 14:42:59.560305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 14:42:59.560309 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.560314 | orchestrator |
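The cinder container definitions above each carry a healthcheck block (interval, retries, start_period, test, timeout). A rough Python approximation of the Docker-style semantics those fields appear to encode, assuming the numeric values are seconds, and noting that the helpers healthcheck_curl and healthcheck_port exist only inside the kolla images, so this is a sketch rather than something to run on the host:

    import subprocess
    import time

    def probe(hc: dict) -> bool:
        # Approximate a Docker healthcheck: run hc['test'] up to
        # hc['retries'] times, hc['interval'] seconds apart.
        assert hc["test"][0] == "CMD-SHELL"
        time.sleep(int(hc["start_period"]))          # grace period after start
        for _ in range(int(hc["retries"])):
            try:
                r = subprocess.run(hc["test"][1], shell=True,
                                   timeout=int(hc["timeout"]))
                if r.returncode == 0:
                    return True                      # healthy
            except subprocess.TimeoutExpired:
                pass                                 # a timeout counts as a failure
            time.sleep(int(hc["interval"]))
        return False                                 # unhealthy after all retries

    hc = {"interval": "30", "retries": "3", "start_period": "5",
          "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
          "timeout": "30"}
    # probe(hc) would return False on a host without kolla's healthcheck_curl.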
2025-05-19 14:42:59.560338 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-05-19 14:42:59.560343 | orchestrator | Monday 19 May 2025 14:38:39 +0000 (0:00:01.754) 0:01:37.788 ************
2025-05-19 14:42:59.560347 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:42:59.560352 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:42:59.560356 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:42:59.560361 | orchestrator |
2025-05-19 14:42:59.560365 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-05-19 14:42:59.560370 | orchestrator | Monday 19 May 2025 14:38:40 +0000 (0:00:01.512) 0:01:39.301 ************
2025-05-19 14:42:59.560374 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:42:59.560379 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:42:59.560383 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:42:59.560388 | orchestrator |
2025-05-19 14:42:59.560392 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-05-19 14:42:59.560397 | orchestrator | Monday 19 May 2025 14:38:42 +0000 (0:00:02.159) 0:01:41.460 ************
2025-05-19 14:42:59.560406 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.560411 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.560415 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.560420 | orchestrator |
2025-05-19 14:42:59.560424 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-05-19 14:42:59.560429 | orchestrator | Monday 19 May 2025 14:38:43 +0000 (0:00:00.440) 0:01:41.900 ************
2025-05-19 14:42:59.560436 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.560440 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.560445 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.560449 | orchestrator |
2025-05-19 14:42:59.560454 | orchestrator | TASK [include_role : designate] ************************************************
2025-05-19 14:42:59.560458 | orchestrator | Monday 19 May 2025 14:38:43 +0000 (0:00:00.423) 0:01:42.324 ************
2025-05-19 14:42:59.560463 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:42:59.560467 | orchestrator |
2025-05-19 14:42:59.560472 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-05-19 14:42:59.560476 | orchestrator | Monday 19 May 2025 14:38:44 +0000 (0:00:00.772) 0:01:43.097 ************
2025-05-19 14:42:59.560481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 14:42:59.560487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:42:59.560494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 14:42:59.560532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:42:59.560536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 14:42:59.560573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:42:59.560578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560587 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560609 | orchestrator | 2025-05-19 14:42:59.560614 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-19 14:42:59.560619 | orchestrator | Monday 19 May 2025 14:38:48 +0000 (0:00:04.210) 0:01:47.308 ************ 2025-05-19 14:42:59.560626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:42:59.560634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:42:59.560638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  
2025-05-19 14:42:59.560662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:42:59.560671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560702 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.560707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:42:59.560726 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.560733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:42:59.560738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.560766 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.560770 | orchestrator | 2025-05-19 14:42:59.560775 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-19 14:42:59.560780 | orchestrator | Monday 19 May 2025 14:38:49 +0000 (0:00:00.883) 0:01:48.192 ************ 2025-05-19 14:42:59.560785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-19 14:42:59.560789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-19 14:42:59.560794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-19 14:42:59.560799 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.560804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-19 14:42:59.560808 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.560815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-19 14:42:59.560820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-19 14:42:59.560824 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.560829 | orchestrator | 2025-05-19 14:42:59.560833 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-19 14:42:59.560838 | orchestrator | Monday 19 May 2025 14:38:50 +0000 (0:00:01.055) 0:01:49.248 ************ 2025-05-19 14:42:59.560842 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.560847 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.560851 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.560856 | orchestrator | 2025-05-19 14:42:59.560860 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-19 14:42:59.560865 | orchestrator | Monday 19 May 2025 14:38:52 +0000 (0:00:01.430) 0:01:50.678 ************ 2025-05-19 14:42:59.560869 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.560873 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.560878 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.560882 | orchestrator | 2025-05-19 14:42:59.560887 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-19 14:42:59.560891 | orchestrator | Monday 19 May 2025 14:38:53 +0000 (0:00:01.573) 0:01:52.251 ************ 2025-05-19 14:42:59.560896 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.560903 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.560907 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.560912 | orchestrator | 2025-05-19 14:42:59.560916 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-19 14:42:59.560920 | orchestrator | Monday 19 May 2025 14:38:53 +0000 (0:00:00.238) 0:01:52.489 ************ 2025-05-19 14:42:59.560925 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.560929 | orchestrator | 2025-05-19 14:42:59.560934 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-19 14:42:59.560938 | orchestrator | Monday 19 May 2025 14:38:54 +0000 (0:00:00.714) 0:01:53.204 ************ 2025-05-19 14:42:59.560946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:42:59.560956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  
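
The loop labels above print each kolla service definition as a Python mapping: a service-level 'enabled' flag plus, for API services, an 'haproxy' sub-mapping whose per-frontend 'enabled' flags decide which HAProxy frontends get rendered. That is consistent with glance-api reporting "changed" on every node while glance-tls-proxy ('enabled': 'no') is skipped. Below is a minimal sketch of that selection logic, assuming Ansible-style truthiness for the mixed True/'yes'/False/'no' flags; it is illustrative only, not the haproxy-config role's actual implementation, and the abridged 'services' mapping is hand-copied from the glance items printed above.

# Illustrative sketch -- not kolla-ansible source. It mimics how the loop
# labels above can be read: only service entries whose 'enabled' flag is
# truthy contribute HAProxy frontends ("changed"); the rest are skipped.
def is_enabled(flag) -> bool:
    """Normalize the mixed bool / 'yes' / 'no' flags seen in the log."""
    if isinstance(flag, str):
        return flag.strip().lower() in ("yes", "true", "1")
    return bool(flag)

def haproxy_frontends(services: dict) -> dict:
    """Return {frontend_name: listen_config} for enabled services only."""
    frontends = {}
    for svc in services.values():
        if not is_enabled(svc.get("enabled")):
            continue  # e.g. glance-tls-proxy with 'enabled': 'no'
        for name, listen in svc.get("haproxy", {}).items():
            if is_enabled(listen.get("enabled")):
                frontends[name] = listen
    return frontends

# Abridged from the glance loop items printed above:
services = {
    "glance-api": {
        "enabled": True,
        "haproxy": {
            "glance_api": {"enabled": True, "mode": "http", "port": "9292"},
            "glance_api_external": {"enabled": True, "mode": "http",
                                    "port": "9292"},
        },
    },
    "glance-tls-proxy": {
        "enabled": "no",  # skipped in the log
        "haproxy": {
            "glance_tls_proxy": {"enabled": False, "mode": "http",
                                 "port": "9292"},
        },
    },
}
print(sorted(haproxy_frontends(services)))
# ['glance_api', 'glance_api_external']

Run as-is, the sketch keeps only the two glance_api frontends and drops the TLS proxy, matching the changed/skipped pattern in the task output above.
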
2025-05-19 14:42:59.560967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:42:59.561110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 14:42:59.561121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:42:59.561135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 
2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 14:42:59.561140 | orchestrator | 2025-05-19 14:42:59.561145 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-19 14:42:59.561150 | orchestrator | Monday 19 May 2025 14:38:58 +0000 (0:00:03.910) 0:01:57.114 ************ 2025-05-19 14:42:59.561155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 14:42:59.561168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 14:42:59.561173 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.561181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 14:42:59.561192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 14:42:59.561198 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.561206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 14:42:59.561216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 14:42:59.561222 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.561226 | orchestrator | 2025-05-19 14:42:59.561231 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-19 14:42:59.561235 | orchestrator | Monday 19 May 2025 14:39:00 +0000 (0:00:02.400) 0:01:59.515 ************ 2025-05-19 14:42:59.561240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 14:42:59.561248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 14:42:59.561253 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.561260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 14:42:59.561265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 14:42:59.561270 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.561275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 14:42:59.561280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 14:42:59.561286 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.561291 | orchestrator | 2025-05-19 14:42:59.561295 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-19 14:42:59.561300 | orchestrator | Monday 19 May 2025 14:39:03 +0000 (0:00:02.544) 0:02:02.059 ************ 2025-05-19 14:42:59.561304 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.561309 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.561314 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.561318 | orchestrator | 2025-05-19 14:42:59.561323 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-19 14:42:59.561327 | orchestrator | Monday 19 May 2025 14:39:04 +0000 (0:00:01.313) 0:02:03.373 ************ 2025-05-19 14:42:59.561332 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.561336 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.561340 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.561345 | orchestrator | 2025-05-19 14:42:59.561349 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-19 14:42:59.561354 | orchestrator | Monday 19 May 2025 14:39:06 +0000 (0:00:01.821) 0:02:05.194 ************ 2025-05-19 14:42:59.561358 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.561363 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.561367 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.561372 | orchestrator | 2025-05-19 14:42:59.561376 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-19 14:42:59.561381 | orchestrator | Monday 19 May 2025 14:39:06 +0000 (0:00:00.304) 0:02:05.499 ************ 2025-05-19 14:42:59.561385 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.561390 | orchestrator | 2025-05-19 14:42:59.561394 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-19 14:42:59.561402 | orchestrator | Monday 19 May 2025 14:39:07 +0000 (0:00:00.798) 0:02:06.297 ************ 2025-05-19 14:42:59.561408 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:42:59.561414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:42:59.561419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:42:59.561423 | orchestrator | 2025-05-19 14:42:59.561428 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-19 14:42:59.561432 | orchestrator | Monday 19 May 2025 14:39:11 +0000 (0:00:03.453) 0:02:09.751 ************ 2025-05-19 14:42:59.561439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 14:42:59.561444 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.561449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 14:42:59.561453 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.561458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 14:42:59.561466 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.561470 | orchestrator | 2025-05-19 14:42:59.561475 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-19 14:42:59.561479 | orchestrator | Monday 19 May 2025 14:39:11 +0000 (0:00:00.339) 0:02:10.090 ************ 2025-05-19 14:42:59.561486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-19 14:42:59.561491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-19 14:42:59.561495 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.561500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-19 14:42:59.561505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-19 14:42:59.561509 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.561514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-19 14:42:59.561518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-19 14:42:59.561523 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.561527 | orchestrator | 2025-05-19 14:42:59.561532 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-19 14:42:59.561536 | orchestrator | Monday 19 May 2025 14:39:12 +0000 (0:00:00.581) 0:02:10.672 ************ 2025-05-19 14:42:59.561541 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.561545 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.561550 | orchestrator | changed: [testbed-node-2] 2025-05-19 
14:42:59.561554 | orchestrator | 2025-05-19 14:42:59.561559 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-19 14:42:59.561563 | orchestrator | Monday 19 May 2025 14:39:13 +0000 (0:00:01.430) 0:02:12.102 ************ 2025-05-19 14:42:59.561568 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.561572 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.561577 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.561581 | orchestrator | 2025-05-19 14:42:59.561586 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-19 14:42:59.561590 | orchestrator | Monday 19 May 2025 14:39:15 +0000 (0:00:01.940) 0:02:14.042 ************ 2025-05-19 14:42:59.561595 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.561599 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.561604 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.561608 | orchestrator | 2025-05-19 14:42:59.561613 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-19 14:42:59.561617 | orchestrator | Monday 19 May 2025 14:39:15 +0000 (0:00:00.284) 0:02:14.327 ************ 2025-05-19 14:42:59.561625 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.561629 | orchestrator | 2025-05-19 14:42:59.561636 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-19 14:42:59.561641 | orchestrator | Monday 19 May 2025 14:39:16 +0000 (0:00:00.850) 0:02:15.177 ************ 2025-05-19 14:42:59.561649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:42:59.561657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:42:59.561669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:42:59.561675 | orchestrator | 2025-05-19 14:42:59.561680 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-19 14:42:59.561684 | orchestrator | Monday 19 May 2025 14:39:19 +0000 (0:00:03.407) 0:02:18.584 ************ 2025-05-19 14:42:59.561689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 14:42:59.561699 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.561707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 14:42:59.561712 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.561744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 14:42:59.561757 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.561762 | orchestrator | 2025-05-19 14:42:59.561767 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-19 14:42:59.561773 | orchestrator | Monday 19 May 2025 14:39:20 +0000 (0:00:00.661) 0:02:19.246 ************ 2025-05-19 14:42:59.561779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 14:42:59.561788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 14:42:59.561794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 14:42:59.561800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 14:42:59.561806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-19 14:42:59.561811 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.561817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 14:42:59.561822 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 14:42:59.561830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 14:42:59.561838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 14:42:59.561844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 14:42:59.561849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 14:42:59.561855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 14:42:59.561860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-19 14:42:59.561865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 14:42:59.562111 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.562120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-19 14:42:59.562125 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.562165 | orchestrator | 2025-05-19 14:42:59.562217 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-19 14:42:59.562223 | orchestrator | Monday 19 May 2025 14:39:21 +0000 (0:00:00.900) 0:02:20.146 ************ 2025-05-19 14:42:59.562227 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.562232 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.562236 | orchestrator | changed: 
[testbed-node-2] 2025-05-19 14:42:59.562240 | orchestrator | 2025-05-19 14:42:59.562245 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-19 14:42:59.562249 | orchestrator | Monday 19 May 2025 14:39:23 +0000 (0:00:01.504) 0:02:21.651 ************ 2025-05-19 14:42:59.562253 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.562257 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.562262 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.562266 | orchestrator | 2025-05-19 14:42:59.562270 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-19 14:42:59.562274 | orchestrator | Monday 19 May 2025 14:39:24 +0000 (0:00:01.913) 0:02:23.564 ************ 2025-05-19 14:42:59.562285 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.562289 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.562293 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.562298 | orchestrator | 2025-05-19 14:42:59.562302 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-19 14:42:59.562306 | orchestrator | Monday 19 May 2025 14:39:25 +0000 (0:00:00.312) 0:02:23.876 ************ 2025-05-19 14:42:59.562310 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.562314 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.562319 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.562323 | orchestrator | 2025-05-19 14:42:59.562327 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-19 14:42:59.562331 | orchestrator | Monday 19 May 2025 14:39:25 +0000 (0:00:00.503) 0:02:24.380 ************ 2025-05-19 14:42:59.562335 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.562339 | orchestrator | 2025-05-19 14:42:59.562344 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-19 14:42:59.562348 | orchestrator | Monday 19 May 2025 14:39:27 +0000 (0:00:01.961) 0:02:26.341 ************ 2025-05-19 14:42:59.562357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:42:59.562363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:42:59.562372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:42:59.562377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:42:59.562384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:42:59.562389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:42:59.562396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:42:59.562401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:42:59.562405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:42:59.562412 | orchestrator | 2025-05-19 14:42:59.562420 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-19 14:42:59.562424 | orchestrator | Monday 19 May 2025 14:39:31 +0000 (0:00:03.646) 0:02:29.987 ************ 2025-05-19 14:42:59.562429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 14:42:59.562434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:42:59.562440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:42:59.562445 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.562449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 14:42:59.562454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:42:59.562464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:42:59.562469 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.562474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 14:42:59.562599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:42:59.562608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:42:59.562612 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.562617 | orchestrator | 2025-05-19 14:42:59.562621 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-19 14:42:59.562625 | orchestrator | Monday 19 May 2025 14:39:31 +0000 (0:00:00.528) 0:02:30.516 ************ 2025-05-19 14:42:59.562630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-19 14:42:59.562635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-19 14:42:59.562643 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.562647 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-19 14:42:59.562663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-19 14:42:59.562668 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.562672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-19 14:42:59.562677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-19 14:42:59.562681 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.562685 | orchestrator | 2025-05-19 14:42:59.562689 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-19 14:42:59.562693 | orchestrator | Monday 19 May 2025 14:39:32 +0000 (0:00:00.889) 0:02:31.406 ************ 2025-05-19 14:42:59.562697 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.562701 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.562705 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.562709 | orchestrator | 2025-05-19 14:42:59.562713 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-19 14:42:59.562717 | orchestrator | Monday 19 May 2025 14:39:34 +0000 (0:00:01.228) 0:02:32.634 ************ 2025-05-19 14:42:59.562721 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.562725 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.562729 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.562733 | orchestrator | 2025-05-19 14:42:59.562737 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-19 14:42:59.562741 | orchestrator | Monday 19 May 2025 14:39:35 +0000 (0:00:01.799) 0:02:34.434 ************ 2025-05-19 14:42:59.562745 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.562749 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.562753 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.562757 | orchestrator | 2025-05-19 14:42:59.562761 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-19 14:42:59.562765 | orchestrator | Monday 19 May 2025 14:39:36 +0000 (0:00:00.251) 0:02:34.686 ************ 2025-05-19 14:42:59.562770 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.562774 | orchestrator | 2025-05-19 14:42:59.562778 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-19 14:42:59.562782 | orchestrator | Monday 19 May 2025 14:39:37 +0000 (0:00:01.014) 0:02:35.700 ************ 2025-05-19 14:42:59.562830 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:42:59.562840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.562848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:42:59.562852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.562896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:42:59.562904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.562912 | orchestrator | 2025-05-19 14:42:59.562916 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-19 14:42:59.562921 | orchestrator | Monday 19 May 2025 14:39:41 +0000 (0:00:04.001) 0:02:39.701 ************ 2025-05-19 14:42:59.562925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:42:59.562933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.562938 | orchestrator | skipping: [testbed-node-0] 2025-05-19 
14:42:59.562942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:42:59.562969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563119 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.563128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:42:59.563132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563136 | orchestrator | skipping: [testbed-node-2] 2025-05-19 
14:42:59.563141 | orchestrator | 2025-05-19 14:42:59.563145 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-19 14:42:59.563149 | orchestrator | Monday 19 May 2025 14:39:41 +0000 (0:00:00.625) 0:02:40.327 ************ 2025-05-19 14:42:59.563164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-19 14:42:59.563168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-19 14:42:59.563173 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.563177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-19 14:42:59.563182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-19 14:42:59.563186 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.563190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-19 14:42:59.563194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-19 14:42:59.563198 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.563203 | orchestrator | 2025-05-19 14:42:59.563207 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-19 14:42:59.563211 | orchestrator | Monday 19 May 2025 14:39:42 +0000 (0:00:01.194) 0:02:41.522 ************ 2025-05-19 14:42:59.563237 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.563243 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.563250 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.563255 | orchestrator | 2025-05-19 14:42:59.563259 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-19 14:42:59.563263 | orchestrator | Monday 19 May 2025 14:39:44 +0000 (0:00:01.205) 0:02:42.728 ************ 2025-05-19 14:42:59.563268 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.563272 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.563276 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.563280 | orchestrator | 2025-05-19 14:42:59.563285 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-19 14:42:59.563289 | orchestrator | Monday 19 May 2025 14:39:45 +0000 (0:00:01.764) 0:02:44.493 ************ 2025-05-19 14:42:59.563293 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.563298 | orchestrator | 2025-05-19 14:42:59.563302 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-19 14:42:59.563306 | orchestrator | Monday 19 May 2025 14:39:46 +0000 (0:00:00.950) 0:02:45.444 
************ 2025-05-19 14:42:59.563314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-19 14:42:59.563318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-19 14:42:59.563367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-19 14:42:59.563372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563411 | orchestrator | 2025-05-19 14:42:59.563415 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-19 14:42:59.563419 | orchestrator | Monday 19 May 2025 14:39:50 +0000 (0:00:03.324) 0:02:48.768 ************ 2025-05-19 14:42:59.563426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-19 14:42:59.563430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563479 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.563519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-19 14:42:59.563524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-19 14:42:59.563536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563655 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.563660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.563668 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.563672 | orchestrator | 2025-05-19 14:42:59.563677 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-19 14:42:59.563685 | orchestrator | Monday 19 May 2025 14:39:51 +0000 (0:00:00.899) 0:02:49.667 ************ 2025-05-19 
14:42:59.563689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-19 14:42:59.563742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-19 14:42:59.563747 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.563751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-19 14:42:59.563755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-19 14:42:59.563759 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.563763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-19 14:42:59.563767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-19 14:42:59.563771 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.563775 | orchestrator | 2025-05-19 14:42:59.563780 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-19 14:42:59.563784 | orchestrator | Monday 19 May 2025 14:39:51 +0000 (0:00:00.798) 0:02:50.466 ************ 2025-05-19 14:42:59.563788 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.563795 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.563800 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.563804 | orchestrator | 2025-05-19 14:42:59.563808 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-19 14:42:59.563812 | orchestrator | Monday 19 May 2025 14:39:53 +0000 (0:00:01.486) 0:02:51.952 ************ 2025-05-19 14:42:59.563826 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.563830 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.563834 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.563838 | orchestrator | 2025-05-19 14:42:59.563842 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-19 14:42:59.563847 | orchestrator | Monday 19 May 2025 14:39:55 +0000 (0:00:02.040) 0:02:53.993 ************ 2025-05-19 14:42:59.563851 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.563855 | orchestrator | 2025-05-19 14:42:59.563859 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-19 14:42:59.563863 | orchestrator | Monday 19 May 2025 14:39:56 +0000 (0:00:01.344) 0:02:55.338 ************ 2025-05-19 14:42:59.563867 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 14:42:59.563871 | orchestrator | 2025-05-19 14:42:59.563875 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-19 
14:42:59.563879 | orchestrator | Monday 19 May 2025 14:39:59 +0000 (0:00:02.982) 0:02:58.320 ************ 2025-05-19 14:42:59.563887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 14:42:59.563892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 14:42:59.563896 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.563914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 14:42:59.563919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 14:42:59.563923 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.563930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 
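Note: the mariadb items above override HAProxy member generation with `custom_member_list`, and only testbed-node-0's `server` line lacks the `backup` keyword — node-1 and node-2 receive traffic only after the active backend fails its health checks, giving active/passive failover at the TCP layer (important for Galera, where writes should hit a single node). A sketch of the listen block such an entry plausibly renders to (illustrative Python; kolla-ansible actually renders this from Jinja2 templates, and `INTERNAL_VIP` is a placeholder since the VIP itself is not shown in this excerpt):

```python
# Illustrative rendering of the HAProxy listen block implied by the
# mariadb item above; not the actual kolla-ansible template.
mariadb = {
    "mode": "tcp",
    "listen_port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s"],
    "custom_member_list": [
        " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
    ],
}

def render_listen(name: str, svc: dict, vip: str) -> str:
    lines = [f"listen {name}",
             f"    mode {svc['mode']}",
             f"    bind {vip}:{svc['listen_port']}"]
    for opt in svc["frontend_tcp_extra"] + svc["backend_tcp_extra"]:
        lines.append(f"    {opt}")
    # custom_member_list entries are already fully formed "server" lines,
    # including the "backup" keyword that makes node-1/node-2 passive.
    lines.extend("    " + member.strip() for member in svc["custom_member_list"])
    return "\n".join(lines)

print(render_listen("mariadb", mariadb, "INTERNAL_VIP"))
```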
14:42:59.563950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 14:42:59.563955 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.563960 | orchestrator | 2025-05-19 14:42:59.563964 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-19 14:42:59.564005 | orchestrator | Monday 19 May 2025 14:40:02 +0000 (0:00:02.690) 0:03:01.010 ************ 2025-05-19 14:42:59.564011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 14:42:59.564033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 14:42:59.564039 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 14:42:59.564076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 14:42:59.564110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 14:42:59.564251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 14:42:59.564262 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.564270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 14:42:59.564275 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.564279 | orchestrator | 2025-05-19 14:42:59.564283 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-19 14:42:59.564287 | orchestrator | Monday 19 May 2025 14:40:04 +0000 (0:00:02.084) 0:03:03.095 ************ 2025-05-19 14:42:59.564302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 14:42:59.564307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 14:42:59.564311 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.564316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 14:42:59.564320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 14:42:59.564324 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.564332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 14:42:59.564340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 14:42:59.564344 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.564348 | orchestrator | 2025-05-19 14:42:59.564352 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-19 14:42:59.564356 | orchestrator | Monday 19 May 2025 14:40:06 +0000 (0:00:02.293) 0:03:05.388 ************ 2025-05-19 14:42:59.564360 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.564364 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.564368 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.564372 | orchestrator | 2025-05-19 14:42:59.564376 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-19 14:42:59.564380 | orchestrator | Monday 19 May 2025 14:40:08 +0000 (0:00:02.016) 0:03:07.404 ************ 2025-05-19 14:42:59.564384 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.564389 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.564393 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.564397 | orchestrator | 2025-05-19 14:42:59.564401 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-19 14:42:59.564405 | orchestrator | Monday 19 May 2025 14:40:10 +0000 (0:00:01.342) 
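Note the division of labor visible in this run: every haproxy-config task for mariadb is skipped, while the proxysql-config "users" task reports changed on all three nodes (the "rules" task is skipped, so apparently no custom query rules are templated here). In other words, this deployment fronts MariaDB with ProxySQL rather than HAProxy. A condensed sketch of that per-service gate — the names `enable_proxysql` and `uses_database` are assumptions for the illustration, not exact kolla-ansible internals:

```python
# Illustrative gate only: which load balancer path a service's config
# tasks take in a deployment like the one logged above.
def loadbalancer_backend(enable_proxysql: bool, uses_database: bool) -> str:
    if uses_database and enable_proxysql:
        return "proxysql"   # MySQL traffic handled by ProxySQL
    return "haproxy"        # everything else stays on HAProxy

assert loadbalancer_backend(enable_proxysql=True, uses_database=True) == "proxysql"   # mariadb
assert loadbalancer_backend(enable_proxysql=True, uses_database=False) == "haproxy"   # e.g. memcached
```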
0:03:08.747 ************ 2025-05-19 14:42:59.564409 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.564413 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.564417 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.564421 | orchestrator | 2025-05-19 14:42:59.564425 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-19 14:42:59.564429 | orchestrator | Monday 19 May 2025 14:40:10 +0000 (0:00:00.296) 0:03:09.043 ************ 2025-05-19 14:42:59.564442 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.564446 | orchestrator | 2025-05-19 14:42:59.564451 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-19 14:42:59.564455 | orchestrator | Monday 19 May 2025 14:40:11 +0000 (0:00:01.049) 0:03:10.093 ************ 2025-05-19 14:42:59.564459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-19 14:42:59.564464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-19 14:42:59.564473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-19 14:42:59.564478 | orchestrator | 2025-05-19 14:42:59.564482 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-19 14:42:59.564486 | orchestrator | Monday 19 May 2025 14:40:13 +0000 (0:00:01.617) 0:03:11.710 
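Note: the memcached container definition above carries a healthcheck of `healthcheck_listen memcached 11211` (30 s interval, 3 retries, 30 s timeout), kolla's helper that verifies the named process is listening on the given port; its HAProxy entry is `enabled: False` with `active_passive: True`, so memcached is not load-balanced here. A rough stand-in for that probe, assuming a plain TCP connect is an acceptable simplification (the real helper also checks which process owns the socket):

```python
# Stand-in for kolla's healthcheck_listen helper: succeed if something
# accepts TCP connections on the port within a timeout. Reachability
# only; process ownership is not checked in this sketch.
import socket
import sys

def tcp_listening(host: str, port: int, timeout: float = 30.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Exit 0/1 the way a container HEALTHCHECK command would.
    sys.exit(0 if tcp_listening("127.0.0.1", 11211) else 1)
```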
************ 2025-05-19 14:42:59.564491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-19 14:42:59.564495 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.564508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-19 14:42:59.564513 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.564517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-19 14:42:59.564521 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.564525 | orchestrator | 2025-05-19 14:42:59.564529 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-19 14:42:59.564534 | orchestrator | Monday 19 May 2025 14:40:13 +0000 (0:00:00.363) 0:03:12.074 ************ 2025-05-19 14:42:59.564538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-19 14:42:59.564545 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.564599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}})  2025-05-19 14:42:59.564604 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.564609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-19 14:42:59.564613 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.564617 | orchestrator | 2025-05-19 14:42:59.564621 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-19 14:42:59.564625 | orchestrator | Monday 19 May 2025 14:40:13 +0000 (0:00:00.540) 0:03:12.614 ************ 2025-05-19 14:42:59.564653 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.564658 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.564826 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.564833 | orchestrator | 2025-05-19 14:42:59.564837 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-19 14:42:59.564842 | orchestrator | Monday 19 May 2025 14:40:14 +0000 (0:00:00.687) 0:03:13.301 ************ 2025-05-19 14:42:59.564846 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.564850 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.564854 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.564859 | orchestrator | 2025-05-19 14:42:59.564863 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-19 14:42:59.564867 | orchestrator | Monday 19 May 2025 14:40:15 +0000 (0:00:01.195) 0:03:14.497 ************ 2025-05-19 14:42:59.564871 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.564876 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.564880 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.564884 | orchestrator | 2025-05-19 14:42:59.564889 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-19 14:42:59.564893 | orchestrator | Monday 19 May 2025 14:40:16 +0000 (0:00:00.307) 0:03:14.804 ************ 2025-05-19 14:42:59.564897 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.564901 | orchestrator | 2025-05-19 14:42:59.564906 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-19 14:42:59.564910 | orchestrator | Monday 19 May 2025 14:40:17 +0000 (0:00:01.366) 0:03:16.171 ************ 2025-05-19 14:42:59.564926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
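Note: each TASK header in this log is followed by a profile_tasks-style timing line such as "Monday 19 May 2025 14:40:15 +0000 (0:00:01.195) 0:03:14.497", where the parenthesized value is the duration of the previous task and the final value is the cumulative elapsed time for the play. Parsing these lines is a quick way to spot slow tasks in a long run; a small self-contained sketch:

```python
# Parse the profile_tasks timing lines shown throughout this log:
# "(previous task duration)   cumulative play time".
import re

TIMING = re.compile(
    r"\((?P<delta>\d+:\d{2}:\d{2}\.\d+)\)\s+(?P<total>\d+:\d{2}:\d{2}\.\d+)"
)

def seconds(hms: str) -> float:
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

line = "Monday 19 May 2025 14:40:15 +0000 (0:00:01.195)       0:03:14.497"
m = TIMING.search(line)
assert m is not None
print(f"previous task took {seconds(m['delta']):.3f}s, "
      f"{seconds(m['total']):.3f}s elapsed overall")
# previous task took 1.195s, 194.497s elapsed overall
```

Feeding every timing line of a build log through this and sorting by delta gives a simple per-task profile without rerunning the play.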
'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:42:59.564932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.564942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:42:59.564949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.564954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.564959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.564973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 14:42:59.564981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.564986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.564993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.564998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 14:42:59.565059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 14:42:59.565220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 14:42:59.565263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:42:59.565277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 14:42:59.565308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 
'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 14:42:59.565375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565383 | orchestrator | 2025-05-19 14:42:59.565505 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-19 14:42:59.565511 | orchestrator | Monday 19 May 2025 14:40:21 +0000 (0:00:04.075) 0:03:20.246 ************ 2025-05-19 14:42:59.565515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:42:59.565522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 14:42:59.565576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:42:59.565594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:42:59.565669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 14:42:59.565680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 14:42:59.565775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 14:42:59.565780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565826 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.565830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 14:42:59.565880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-19 14:42:59.565884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 14:42:59.565888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 14:42:59.565916 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.565922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:42:59.565926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.565930 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.565959 | orchestrator | 2025-05-19 14:42:59.565964 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-19 14:42:59.565968 | orchestrator | Monday 19 May 2025 14:40:23 +0000 (0:00:01.511) 0:03:21.758 ************ 2025-05-19 14:42:59.565972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-19 14:42:59.565976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-19 14:42:59.565980 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.565993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-19 14:42:59.565997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-19 14:42:59.566001 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.566004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-19 14:42:59.566008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-19 
14:42:59.566012 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.566046 | orchestrator | 2025-05-19 14:42:59.566051 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-19 14:42:59.566055 | orchestrator | Monday 19 May 2025 14:40:25 +0000 (0:00:02.157) 0:03:23.916 ************ 2025-05-19 14:42:59.566058 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.566062 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.566066 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.566069 | orchestrator | 2025-05-19 14:42:59.566073 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-19 14:42:59.566077 | orchestrator | Monday 19 May 2025 14:40:26 +0000 (0:00:01.245) 0:03:25.162 ************ 2025-05-19 14:42:59.566081 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.566084 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.566088 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.566092 | orchestrator | 2025-05-19 14:42:59.566096 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-19 14:42:59.566099 | orchestrator | Monday 19 May 2025 14:40:28 +0000 (0:00:01.901) 0:03:27.064 ************ 2025-05-19 14:42:59.566103 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.566107 | orchestrator | 2025-05-19 14:42:59.566110 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-19 14:42:59.566114 | orchestrator | Monday 19 May 2025 14:40:29 +0000 (0:00:01.159) 0:03:28.223 ************ 2025-05-19 14:42:59.566121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.566126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.566141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.566151 | orchestrator | 2025-05-19 14:42:59.566155 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-19 14:42:59.566158 | orchestrator | Monday 19 May 2025 14:40:33 +0000 (0:00:03.563) 0:03:31.787 ************ 2025-05-19 14:42:59.566162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.566166 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.566172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.566176 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.566180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.566184 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.566188 | orchestrator | 2025-05-19 14:42:59.566671 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-19 14:42:59.566687 | orchestrator | Monday 19 May 2025 14:40:33 +0000 (0:00:00.469) 0:03:32.256 ************ 2025-05-19 14:42:59.566691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 14:42:59.566696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 14:42:59.566706 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.566752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 14:42:59.566757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 14:42:59.566761 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.566765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 14:42:59.566769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 14:42:59.566773 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.566777 | orchestrator | 2025-05-19 14:42:59.566780 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-19 14:42:59.566784 | orchestrator | Monday 19 May 2025 14:40:34 +0000 (0:00:00.777) 0:03:33.034 ************ 2025-05-19 14:42:59.566788 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.566792 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.566795 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.566799 | orchestrator | 2025-05-19 14:42:59.566803 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-19 14:42:59.566807 | orchestrator | Monday 19 May 2025 14:40:36 +0000 (0:00:01.741) 0:03:34.776 ************ 
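Note: the items looped over in the placement tasks above are kolla-ansible service definitions, which Ansible prints as Python dicts. Re-expressed as YAML for readability (values copied from the testbed-node-0 item in this log; the layout is an illustrative rendering, not the literal kolla-ansible source, and the empty-string volume entries are dropped):

placement-api:                       # service key as iterated by haproxy-config/proxysql-config
  container_name: placement_api
  group: placement-api
  image: registry.osism.tech/kolla/placement-api:2024.2
  enabled: true
  volumes:
    - "/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "kolla_logs:/var/log/kolla/"
  dimensions: {}
  healthcheck:                       # healthcheck_curl/healthcheck_port are helper scripts shipped in kolla images
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"]
    timeout: "30"
  haproxy:
    placement_api:                   # internal frontend
      enabled: true
      mode: http
      external: false
      port: "8780"
      listen_port: "8780"
      tls_backend: "no"              # quoted so YAML keeps it a string, as in the dumped dict
    placement_api_external:          # external frontend behind api.testbed.osism.xyz
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8780"
      listen_port: "8780"
      tls_backend: "no"

The nested haproxy mapping is what the "Configuring firewall for placement" task above iterates over, which is why its items are keyed placement_api and placement_api_external rather than placement-api; the same pattern repeats for the neutron tasks earlier and the nova tasks that follow.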
2025-05-19 14:42:59.566810 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.566814 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.566818 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.566822 | orchestrator | 2025-05-19 14:42:59.566826 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-19 14:42:59.566830 | orchestrator | Monday 19 May 2025 14:40:38 +0000 (0:00:02.090) 0:03:36.866 ************ 2025-05-19 14:42:59.566833 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.566837 | orchestrator | 2025-05-19 14:42:59.566841 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-19 14:42:59.566845 | orchestrator | Monday 19 May 2025 14:40:39 +0000 (0:00:01.192) 0:03:38.058 ************ 2025-05-19 14:42:59.566852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.566858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.566878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.566883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.566887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.566893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.566897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.566914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.566919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.566923 | orchestrator | 2025-05-19 14:42:59.566927 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-19 14:42:59.566931 | orchestrator | Monday 19 May 2025 14:40:43 +0000 (0:00:04.547) 0:03:42.606 ************ 2025-05-19 14:42:59.566937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.566941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 
14:42:59.566948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.566952 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.566967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.566972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.566976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.566980 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.566986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.566992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.567006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.567011 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.567030 | orchestrator | 2025-05-19 14:42:59.567034 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-19 14:42:59.567038 | orchestrator | Monday 19 May 2025 14:40:44 +0000 (0:00:00.977) 0:03:43.584 ************ 2025-05-19 14:42:59.567042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567058 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.567062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567090 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.567094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 14:42:59.567101 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.567105 | orchestrator | 2025-05-19 14:42:59.567109 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-19 14:42:59.567113 | orchestrator | Monday 19 May 2025 14:40:45 +0000 (0:00:01.030) 0:03:44.614 ************ 2025-05-19 14:42:59.567117 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.567120 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.567124 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.567128 | orchestrator | 2025-05-19 14:42:59.567132 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-19 14:42:59.567135 | orchestrator | Monday 19 May 2025 14:40:47 +0000 (0:00:01.722) 0:03:46.337 ************ 2025-05-19 14:42:59.567139 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.567143 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.567146 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.567150 | orchestrator |
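The items above illustrate the contract between a kolla-ansible service role and the shared haproxy-config role: only entries that carry a 'haproxy' sub-dict (here nova-api) get load-balancer configuration, which is why the nova-scheduler and nova-super-conductor items are skipped. A minimal sketch of what that sub-dict turns into, in plain Python; the backend addresses are the node-internal API IPs visible in the logged healthcheck URLs, and the rendering below is illustrative only, not the role's actual Jinja2 template:

```python
# Illustrative sketch: how one logged 'haproxy' entry maps onto an HAProxy
# frontend/backend pair. Function and server names are invented for the
# sketch; kolla-ansible does this via Jinja2 templates.

nova_api = {
    "enabled": True,
    "mode": "http",
    "external": False,
    "port": "8774",
    "listen_port": "8774",
    "tls_backend": "no",
}

# Internal API addresses of testbed-node-0..2, taken from the logged
# healthcheck URLs.
members = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]

def render(name, svc, nodes):
    """Render one 'haproxy' entry as a plain HAProxy stanza."""
    if svc.get("enabled") in (False, "no"):  # kolla mixes bools and 'yes'/'no'
        return ""  # disabled entries (e.g. nova_metadata_external) emit nothing
    out = [
        f"frontend {name}_front",
        f"    mode {svc['mode']}",
        f"    bind *:{svc['listen_port']}",
        f"    default_backend {name}_back",
        f"backend {name}_back",
        f"    mode {svc['mode']}",
    ]
    out += [f"    server node{i} {ip}:{svc['port']} check" for i, ip in enumerate(nodes)]
    return "\n".join(out)

print(render("nova_api", nova_api, members))
```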
2025-05-19 14:42:59.567154 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-19 14:42:59.567158 | orchestrator | Monday 19 May 2025 14:40:50 +0000 (0:00:02.323) 0:03:48.660 ************ 2025-05-19 14:42:59.567161 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.567165 | orchestrator | 2025-05-19 14:42:59.567169 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-19 14:42:59.567184 | orchestrator | Monday 19 May 2025 14:40:51 +0000 (0:00:01.744) 0:03:50.405 ************ 2025-05-19 14:42:59.567189 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-19 14:42:59.567193 | orchestrator | 2025-05-19 14:42:59.567196 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-19 14:42:59.567200 | orchestrator | Monday 19 May 2025 14:40:52 +0000 (0:00:01.041) 0:03:51.447 ************ 2025-05-19 14:42:59.567204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-19 14:42:59.567208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-19 14:42:59.567215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-19 14:42:59.567219 | orchestrator | 2025-05-19 14:42:59.567222 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-19 14:42:59.567226 | orchestrator | Monday 19 May 2025 14:40:56 +0000 (0:00:03.950) 0:03:55.397 ************ 2025-05-19 14:42:59.567232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 14:42:59.567236 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.567240 | orchestrator | skipping: [testbed-node-2] =>
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 14:42:59.567244 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.567248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 14:42:59.567251 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.567255 | orchestrator | 2025-05-19 14:42:59.567259 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-19 14:42:59.567263 | orchestrator | Monday 19 May 2025 14:40:58 +0000 (0:00:01.380) 0:03:56.778 ************ 2025-05-19 14:42:59.567277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 14:42:59.567282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 14:42:59.567286 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.567290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 14:42:59.567294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 14:42:59.567300 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.567304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 14:42:59.567308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 14:42:59.567312 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.567316 | orchestrator | 2025-05-19 14:42:59.567320 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-19 14:42:59.567323 | orchestrator | 
Monday 19 May 2025 14:40:59 +0000 (0:00:01.824) 0:03:58.603 ************ 2025-05-19 14:42:59.567327 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.567331 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.567334 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.567338 | orchestrator | 2025-05-19 14:42:59.567342 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-19 14:42:59.567346 | orchestrator | Monday 19 May 2025 14:41:02 +0000 (0:00:02.441) 0:04:01.044 ************ 2025-05-19 14:42:59.567349 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.567353 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.567357 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.567360 | orchestrator | 2025-05-19 14:42:59.567364 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-19 14:42:59.567368 | orchestrator | Monday 19 May 2025 14:41:05 +0000 (0:00:02.765) 0:04:03.809 ************ 2025-05-19 14:42:59.567372 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-19 14:42:59.567376 | orchestrator | 2025-05-19 14:42:59.567383 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-19 14:42:59.567387 | orchestrator | Monday 19 May 2025 14:41:06 +0000 (0:00:00.862) 0:04:04.672 ************ 2025-05-19 14:42:59.567392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 14:42:59.567396 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.567401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 14:42:59.567405 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.567420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 14:42:59.567427 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.567431 | orchestrator | 2025-05-19 14:42:59.567435 | orchestrator | TASK 
[haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-19 14:42:59.567440 | orchestrator | Monday 19 May 2025 14:41:07 +0000 (0:00:01.205) 0:04:05.877 ************ 2025-05-19 14:42:59.567444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 14:42:59.567449 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.567453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 14:42:59.567457 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.567462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 14:42:59.567466 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.567470 | orchestrator | 2025-05-19 14:42:59.567474 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-19 14:42:59.567479 | orchestrator | Monday 19 May 2025 14:41:08 +0000 (0:00:01.672) 0:04:07.549 ************ 2025-05-19 14:42:59.567483 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.567487 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.567491 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.567495 | orchestrator | 2025-05-19 14:42:59.567501 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-19 14:42:59.567506 | orchestrator | Monday 19 May 2025 14:41:10 +0000 (0:00:01.207) 0:04:08.756 ************ 2025-05-19 14:42:59.567510 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:42:59.567514 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:42:59.567518 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:42:59.567522 | orchestrator | 2025-05-19 14:42:59.567526 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-19 14:42:59.567530 | orchestrator | Monday 19 May 2025 14:41:12 +0000 (0:00:02.292) 0:04:11.049 ************ 2025-05-19 14:42:59.567535 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:42:59.567539 | orchestrator | ok: [testbed-node-2] 2025-05-19 
14:42:59.567543 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:42:59.567547 | orchestrator | 2025-05-19 14:42:59.567551 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-19 14:42:59.567555 | orchestrator | Monday 19 May 2025 14:41:15 +0000 (0:00:03.118) 0:04:14.167 ************ 2025-05-19 14:42:59.567559 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-19 14:42:59.567566 | orchestrator | 2025-05-19 14:42:59.567570 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-19 14:42:59.567574 | orchestrator | Monday 19 May 2025 14:41:16 +0000 (0:00:01.063) 0:04:15.231 ************ 2025-05-19 14:42:59.567578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 14:42:59.567583 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.567598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 14:42:59.567603 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.567607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 14:42:59.567611 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.567615 | orchestrator | 2025-05-19 14:42:59.567620 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-19 14:42:59.567624 | orchestrator | Monday 19 May 2025 14:41:17 +0000 (0:00:01.026) 0:04:16.258 ************ 2025-05-19 14:42:59.567628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 14:42:59.567633 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.567637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 14:42:59.567641 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.567647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 14:42:59.567654 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.567658 | orchestrator | 2025-05-19 14:42:59.567662 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-19 14:42:59.567667 | orchestrator | Monday 19 May 2025 14:41:18 +0000 (0:00:01.187) 0:04:17.445 ************ 2025-05-19 14:42:59.567671 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.567675 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.567679 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.567683 | orchestrator | 2025-05-19 14:42:59.567688 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-19 14:42:59.567692 | orchestrator | Monday 19 May 2025 14:41:20 +0000 (0:00:01.713) 0:04:19.158 ************ 2025-05-19 14:42:59.567696 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:42:59.567700 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:42:59.567704 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:42:59.567708 | orchestrator | 2025-05-19 14:42:59.567712 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-19 14:42:59.567716 | orchestrator | Monday 19 May 2025 14:41:22 +0000 (0:00:02.136) 0:04:21.294 ************ 2025-05-19 14:42:59.567721 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:42:59.567725 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:42:59.567729 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:42:59.567733 | orchestrator |
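The three passes just completed are nova-cell's console-proxy loop: cell_proxy_loadbalancer.yml is included once per proxy (nova-novncproxy, nova-spicehtml5proxy, nova-serialproxy), the haproxy items are skipped whenever a proxy's 'enabled' flag is False, and the repeated nova-cell ProxySQL users/rules copies report 'changed' on the first pass but 'ok' afterwards, because Ansible only rewrites a file whose checksum differs. A small sketch of that control flow (illustrative, not kolla-ansible source):

```python
# Sketch of the per-proxy loop visible above. Flags and ports are taken
# from the logged items; the loop itself is an illustration.

cell_proxies = {
    "nova-novncproxy":      {"enabled": True,  "port": "6080"},
    "nova-spicehtml5proxy": {"enabled": False, "port": "6082"},
    "nova-serialproxy":     {"enabled": False, "port": "6083"},
}

proxysql_written = False  # state carried across the three passes

for name, svc in cell_proxies.items():
    # Only enabled proxies get an haproxy config copied over.
    haproxy = "changed" if svc["enabled"] else "skipping"
    # The users/rules content is identical every pass, so only the first
    # write reports 'changed'; later passes compare equal and report 'ok'.
    proxysql = "changed" if not proxysql_written else "ok"
    proxysql_written = True
    print(f"{name} (port {svc['port']}): haproxy={haproxy}, proxysql={proxysql}")
```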
2025-05-19 14:42:59.567738 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-19 14:42:59.567742 | orchestrator | Monday 19 May 2025 14:41:25 +0000 (0:00:03.026) 0:04:24.321 ************ 2025-05-19 14:42:59.567746 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.567750 | orchestrator | 2025-05-19 14:42:59.567753 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-19 14:42:59.567757 | orchestrator | Monday 19 May 2025 14:41:27 +0000 (0:00:01.364) 0:04:25.686 ************ 2025-05-19 14:42:59.567772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.567777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 14:42:59.567781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.567789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 14:42:59.567797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.567820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.567831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 14:42:59.567835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 14:42:59.567850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.567865 | orchestrator | 2025-05-19 14:42:59.567869 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-19 14:42:59.567873 | orchestrator | Monday 19 May 2025 14:41:30 +0000 (0:00:03.791) 0:04:29.477 ************ 2025-05-19 14:42:59.567879 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.567883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 14:42:59.567887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.567911 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.567915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.567921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 14:42:59.567925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.567956 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.567961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 14:42:59.567968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 14:42:59.567972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 14:42:59.567983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:42:59.567987 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.567990 | orchestrator | 2025-05-19 14:42:59.567994 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-19 14:42:59.567998 | 
orchestrator | Monday 19 May 2025 14:41:31 +0000 (0:00:00.680) 0:04:30.158 ************ 2025-05-19 14:42:59.568002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 14:42:59.568006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 14:42:59.568009 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.568061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 14:42:59.568066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 14:42:59.568070 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.568074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 14:42:59.568081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 14:42:59.568085 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.568088 | orchestrator | 2025-05-19 14:42:59.568092 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-19 14:42:59.568096 | orchestrator | Monday 19 May 2025 14:41:32 +0000 (0:00:00.920) 0:04:31.079 ************ 2025-05-19 14:42:59.568099 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.568103 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.568107 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.568111 | orchestrator | 2025-05-19 14:42:59.568114 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-19 14:42:59.568118 | orchestrator | Monday 19 May 2025 14:41:34 +0000 (0:00:01.814) 0:04:32.893 ************ 2025-05-19 14:42:59.568122 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:42:59.568125 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:42:59.568129 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:42:59.568133 | orchestrator |
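Beside the optional 'haproxy' sub-dict, every container definition in this play carries a 'healthcheck' dict: healthcheck_curl probes an HTTP endpoint, healthcheck_port checks plain TCP reachability, and interval/retries/start_period/timeout are given in seconds. These settings become the container's Docker healthcheck. A sketch of that translation, assuming only the standard Docker Engine HealthConfig fields (durations in nanoseconds); the input dict is the octavia-api entry from the log, and the wiring into kolla's container creation is assumed, not shown:

```python
# Illustrative translation of a logged kolla healthcheck dict into the
# shape Docker's API expects. Field names follow Docker's HealthConfig.

logged = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"],
    "timeout": "30",
}

NS = 1_000_000_000  # Docker expresses these durations in nanoseconds

docker_healthcheck = {
    "Test": logged["test"],
    "Interval": int(logged["interval"]) * NS,
    "Timeout": int(logged["timeout"]) * NS,
    "StartPeriod": int(logged["start_period"]) * NS,
    "Retries": int(logged["retries"]),
}

print(docker_healthcheck)
```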
changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 14:42:59.568166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 14:42:59.568180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 14:42:59.568187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 14:42:59.568192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 14:42:59.568199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 14:42:59.568204 | orchestrator | 2025-05-19 14:42:59.568207 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-19 14:42:59.568211 | orchestrator | Monday 19 May 2025 14:41:42 +0000 (0:00:05.270) 0:04:41.464 ************ 2025-05-19 14:42:59.568225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 14:42:59.568232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 14:42:59.568236 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.568240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 14:42:59.568246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 14:42:59.568250 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.568264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 14:42:59.568271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 14:42:59.568275 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.568279 | orchestrator | 2025-05-19 14:42:59.568282 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-19 14:42:59.568286 | orchestrator | Monday 19 May 2025 14:41:43 +0000 (0:00:00.964) 0:04:42.428 ************ 2025-05-19 14:42:59.568290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-19 14:42:59.568294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 14:42:59.568298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 14:42:59.568302 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.568306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-19 14:42:59.568311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 14:42:59.568315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 14:42:59.568319 | orchestrator 
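For opensearch, the haproxy data logged above describes three listeners: an internal-only one for the OpenSearch REST API, plus an internal and an external one for the dashboards. The same data, annotated as YAML (comments added; values are straight from the log):

    opensearch:
      enabled: true
      mode: http
      external: false                    # bound to the internal VIP only
      port: "9200"
      frontend_http_extra:
        - option dontlog-normal          # keeps routine successful requests out of the HAProxy log
    opensearch-dashboards:
      enabled: true
      mode: http
      external: false
      port: "5601"
      auth_user: opensearch              # dashboards are protected with HTTP basic auth
      auth_pass: password
    opensearch_dashboards_external:
      enabled: true
      mode: http
      external: true                     # additionally exposed via the external VIP
      external_fqdn: api.testbed.osism.xyz
      port: "5601"
      listen_port: "5601"
      auth_user: opensearch
      auth_pass: password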
| skipping: [testbed-node-1] 2025-05-19 14:42:59.568323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-19 14:42:59.568326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 14:42:59.568333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 14:42:59.568337 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.568340 | orchestrator | 2025-05-19 14:42:59.568344 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-19 14:42:59.568348 | orchestrator | Monday 19 May 2025 14:41:44 +0000 (0:00:00.855) 0:04:43.284 ************ 2025-05-19 14:42:59.568352 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.568355 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.568359 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.568363 | orchestrator | 2025-05-19 14:42:59.568366 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-19 14:42:59.568370 | orchestrator | Monday 19 May 2025 14:41:45 +0000 (0:00:00.402) 0:04:43.686 ************ 2025-05-19 14:42:59.568374 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.568378 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.568381 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.568385 | orchestrator | 2025-05-19 14:42:59.568399 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-19 14:42:59.568403 | orchestrator | Monday 19 May 2025 14:41:46 +0000 (0:00:01.311) 0:04:44.998 ************ 2025-05-19 14:42:59.568407 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:42:59.568411 | orchestrator | 2025-05-19 14:42:59.568415 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-19 14:42:59.568418 | orchestrator | Monday 19 May 2025 14:41:48 +0000 (0:00:01.639) 0:04:46.637 ************ 2025-05-19 14:42:59.568422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 14:42:59.568426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 14:42:59.568432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 14:42:59.568439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 14:42:59.568444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 14:42:59.568487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 14:42:59.568491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 14:42:59.568520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 14:42:59.568528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 14:42:59.568546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 14:42:59.568552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 14:42:59.568559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 14:42:59.568570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568593 | orchestrator | 2025-05-19 14:42:59.568597 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-19 14:42:59.568601 | orchestrator | Monday 19 May 2025 14:41:52 +0000 (0:00:04.282) 0:04:50.920 ************ 2025-05-19 14:42:59.568605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 14:42:59.568609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 14:42:59.568615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568623 | orchestrator | 
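The prometheus_server entry differs from the stateless API services in one detail: active_passive is set. That flag tells the HAProxy template to direct traffic to a single backend at a time rather than balancing across all three nodes, presumably because each Prometheus server scrapes into its own local TSDB and spreading queries across them would return inconsistent results. The relevant fragment of the logged value, as YAML:

    prometheus_server:
      enabled: true
      mode: http
      external: false
      port: "9091"
      active_passive: true      # one backend active at a time, the others effectively standby
    prometheus_server_external:
      enabled: false            # not exposed on the external VIP in this testbed
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "9091"
      listen_port: "9091"
      active_passive: true

The alertmanager entries seen earlier follow the same active_passive pattern, but with basic auth and an enabled external listener.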
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 14:42:59.568635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 14:42:59.568639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 
14:42:59.568649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568653 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.568657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 14:42:59.568663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 14:42:59.568669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 14:42:59.568673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 14:42:59.568677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 14:42:59.568713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 14:42:59.568717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 14:42:59.568723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 14:42:59.568729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568744 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.568750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 14:42:59.568754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 14:42:59.568761 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:42:59.568765 | orchestrator | 2025-05-19 14:42:59.568769 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-19 14:42:59.568773 | orchestrator | Monday 19 May 2025 14:41:53 +0000 (0:00:01.689) 0:04:52.610 ************ 2025-05-19 14:42:59.568777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-19 14:42:59.568780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-19 14:42:59.568784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-19 14:42:59.568788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 14:42:59.568792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 14:42:59.568797 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:42:59.568803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-19 14:42:59.568807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 14:42:59.568811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 14:42:59.568815 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:42:59.568818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-19 14:42:59.568822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-19 14:42:59.568826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 14:42:59.568832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 14:42:59.568840 | 
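A pattern worth noting across these blocks: the ProxySQL users/rules tasks report "changed" for octavia but "skipping" for opensearch (above) and prometheus (below). Only services that reach MariaDB through ProxySQL get user and query-rule configs; OpenSearch and Prometheus keep their own storage, so there is nothing to template. A plausible sketch of the copy task, with template name, destination, and the gating variable all assumed for illustration:

    - name: "Copying over {{ project_name }} ProxySQL users config"   # sketch only
      ansible.builtin.template:
        src: users.yaml.j2                                            # template name assumed
        dest: "/etc/kolla/proxysql/users/{{ project_name }}.yaml"     # destination assumed
      when: project_database_enabled | bool                           # hypothetical gate: service uses MariaDB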
2025-05-19 14:42:59.568843 | orchestrator |
2025-05-19 14:42:59.568847 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-05-19 14:42:59.568851 | orchestrator | Monday 19 May 2025 14:41:55 +0000 (0:00:01.675) 0:04:54.285 ************
2025-05-19 14:42:59.568855 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.568858 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.568862 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.568866 | orchestrator |
2025-05-19 14:42:59.568869 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-05-19 14:42:59.568873 | orchestrator | Monday 19 May 2025 14:41:56 +0000 (0:00:00.413) 0:04:54.698 ************
2025-05-19 14:42:59.568877 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.568880 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.568884 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.568888 | orchestrator |
2025-05-19 14:42:59.568891 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-05-19 14:42:59.568895 | orchestrator | Monday 19 May 2025 14:41:57 +0000 (0:00:01.405) 0:04:56.104 ************
2025-05-19 14:42:59.568899 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:42:59.568902 | orchestrator |
2025-05-19 14:42:59.568906 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-05-19 14:42:59.568910 | orchestrator | Monday 19 May 2025 14:41:58 +0000 (0:00:01.464) 0:04:57.569 ************
2025-05-19 14:42:59.568914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-19 14:42:59.568920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-19 14:42:59.568924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-19 14:42:59.568930 | orchestrator |
2025-05-19 14:42:59.568936 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-05-19 14:42:59.568940 | orchestrator | Monday 19 May 2025 14:42:01 +0000 (0:00:02.385) 0:04:59.954 ************
2025-05-19 14:42:59.568944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-19 14:42:59.568948 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.568952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-19 14:42:59.568956 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.568962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-19 14:42:59.568966 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.568970 | orchestrator |
2025-05-19 14:42:59.568973 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-05-19 14:42:59.568980 | orchestrator | Monday 19 May 2025 14:42:01 +0000 (0:00:00.322) 0:05:00.277 ************
2025-05-19 14:42:59.568984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-19 14:42:59.568988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-19 14:42:59.568991 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.568995 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.568999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-19 14:42:59.569002 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569006 | orchestrator |
2025-05-19 14:42:59.569010 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-05-19 14:42:59.569026 | orchestrator | Monday 19 May 2025 14:42:02 +0000 (0:00:00.746) 0:05:01.023 ************
2025-05-19 14:42:59.569032 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569035 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569039 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569043 | orchestrator |
2025-05-19 14:42:59.569047 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-05-19 14:42:59.569050 | orchestrator | Monday 19 May 2025 14:42:02 +0000 (0:00:00.367) 0:05:01.390 ************
2025-05-19 14:42:59.569054 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569058 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569061 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569068 | orchestrator |
2025-05-19 14:42:59.569071 | orchestrator | TASK [include_role : skyline] **************************************************
2025-05-19 14:42:59.569075 | orchestrator | Monday 19 May 2025 14:42:03 +0000 (0:00:01.066) 0:05:02.457 ************
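The `(item={'key': ..., 'value': ...})` pairs these tasks loop over are entries of a per-service dict whose values bundle container settings with an optional `haproxy` section. A rough sketch of the data shape and the iteration the output implies; the field values are copied from the rabbitmq entry above, but the filtering function itself is illustrative, not kolla-ansible's own code:

    services = {
        "rabbitmq": {
            "container_name": "rabbitmq",
            "enabled": True,
            "image": "registry.osism.tech/kolla/rabbitmq:2024.2",
            "haproxy": {
                "rabbitmq_management": {"enabled": "yes", "mode": "http",
                                        "port": "15672", "host_group": "rabbitmq"},
            },
        },
    }

    # Collect every HAProxy frontend of every enabled service.
    frontends = {name: cfg
                 for svc in services.values() if svc.get("enabled")
                 for name, cfg in svc.get("haproxy", {}).items()}
    print(sorted(frontends))  # ['rabbitmq_management']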
2025-05-19 14:42:59.569079 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:42:59.569082 | orchestrator |
2025-05-19 14:42:59.569086 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-05-19 14:42:59.569090 | orchestrator | Monday 19 May 2025 14:42:05 +0000 (0:00:01.553) 0:05:04.010 ************
2025-05-19 14:42:59.569094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569127 | orchestrator |
2025-05-19 14:42:59.569130 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-05-19 14:42:59.569136 | orchestrator | Monday 19 May 2025 14:42:10 +0000 (0:00:05.372) 0:05:09.383 ************
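The `healthcheck` block inside each changed item uses second-valued strings for its timing fields. A sketch of how such a block could be translated for the Docker SDK for Python, which expects durations in nanoseconds; the conversion and the parameter shape are assumptions made here for illustration, not code taken from the role:

    NS_PER_S = 1_000_000_000

    log_hc = {"interval": "30", "retries": "3", "start_period": "5",
              "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9998/docs"],
              "timeout": "30"}

    docker_hc = {
        "test": log_hc["test"],
        "interval": int(log_hc["interval"]) * NS_PER_S,
        "timeout": int(log_hc["timeout"]) * NS_PER_S,
        "retries": int(log_hc["retries"]),
        "start_period": int(log_hc["start_period"]) * NS_PER_S,
    }
    # docker_hc could then be passed as the healthcheck= argument when
    # creating a container with the Docker SDK.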
2025-05-19 14:42:59.569140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569149 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569163 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-19 14:42:59.569177 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569181 | orchestrator |
2025-05-19 14:42:59.569184 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-05-19 14:42:59.569190 | orchestrator | Monday 19 May 2025 14:42:11 +0000 (0:00:00.602) 0:05:09.985 ************
2025-05-19 14:42:59.569194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569209 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569231 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-19 14:42:59.569251 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569255 | orchestrator |
2025-05-19 14:42:59.569259 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-05-19 14:42:59.569263 | orchestrator | Monday 19 May 2025 14:42:12 +0000 (0:00:01.482) 0:05:11.468 ************
2025-05-19 14:42:59.569267 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:42:59.569270 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:42:59.569274 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:42:59.569278 | orchestrator |
2025-05-19 14:42:59.569281 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-05-19 14:42:59.569285 | orchestrator | Monday 19 May 2025 14:42:14 +0000 (0:00:01.251) 0:05:12.719 ************
2025-05-19 14:42:59.569289 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:42:59.569293 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:42:59.569296 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:42:59.569300 | orchestrator |
2025-05-19 14:42:59.569304 | orchestrator | TASK [include_role : swift] ****************************************************
2025-05-19 14:42:59.569307 | orchestrator | Monday 19 May 2025 14:42:16 +0000 (0:00:02.088) 0:05:14.807 ************
2025-05-19 14:42:59.569311 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569315 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569318 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569322 | orchestrator |
2025-05-19 14:42:59.569326 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-05-19 14:42:59.569330 | orchestrator | Monday 19 May 2025 14:42:16 +0000 (0:00:00.284) 0:05:15.091 ************
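From here the play walks the remaining service roles; swift, tacker, and the roles that follow are skipped on every node, which is the usual sign that their enable flags are off in this testbed. A toy model of that gating; the `enable_<service>` flag names follow the common kolla-ansible convention and are assumed here, not read from this configuration:

    flags = {"enable_skyline": True}  # anything absent is treated as disabled

    for role in ("skyline", "swift", "tacker", "trove", "venus", "watcher", "zun"):
        if flags.get(f"enable_{role}", False):
            print(f"included: {role} for testbed-node-0, testbed-node-1, testbed-node-2")
        else:
            print(f"TASK [include_role : {role}] skipped on all nodes")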
2025-05-19 14:42:59.569333 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569337 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569341 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569344 | orchestrator |
2025-05-19 14:42:59.569348 | orchestrator | TASK [include_role : trove] ****************************************************
2025-05-19 14:42:59.569353 | orchestrator | Monday 19 May 2025 14:42:16 +0000 (0:00:00.279) 0:05:15.371 ************
2025-05-19 14:42:59.569357 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569361 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569365 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569369 | orchestrator |
2025-05-19 14:42:59.569372 | orchestrator | TASK [include_role : venus] ****************************************************
2025-05-19 14:42:59.569376 | orchestrator | Monday 19 May 2025 14:42:17 +0000 (0:00:00.599) 0:05:15.970 ************
2025-05-19 14:42:59.569380 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569386 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569390 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569394 | orchestrator |
2025-05-19 14:42:59.569397 | orchestrator | TASK [include_role : watcher] **************************************************
2025-05-19 14:42:59.569401 | orchestrator | Monday 19 May 2025 14:42:17 +0000 (0:00:00.308) 0:05:16.279 ************
2025-05-19 14:42:59.569405 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569408 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569412 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569416 | orchestrator |
2025-05-19 14:42:59.569420 | orchestrator | TASK [include_role : zun] ******************************************************
2025-05-19 14:42:59.569423 | orchestrator | Monday 19 May 2025 14:42:17 +0000 (0:00:00.297) 0:05:16.576 ************
2025-05-19 14:42:59.569427 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569431 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569434 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569438 | orchestrator |
2025-05-19 14:42:59.569442 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-05-19 14:42:59.569445 | orchestrator | Monday 19 May 2025 14:42:18 +0000 (0:00:00.750) 0:05:17.326 ************
2025-05-19 14:42:59.569449 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:42:59.569453 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:42:59.569456 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:42:59.569460 | orchestrator |
2025-05-19 14:42:59.569464 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-05-19 14:42:59.569468 | orchestrator | Monday 19 May 2025 14:42:19 +0000 (0:00:00.621) 0:05:17.948 ************
2025-05-19 14:42:59.569471 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:42:59.569475 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:42:59.569479 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:42:59.569482 | orchestrator |
2025-05-19 14:42:59.569486 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-05-19 14:42:59.569490 | orchestrator | Monday 19 May 2025 14:42:19 +0000 (0:00:00.313) 0:05:18.261 ************
2025-05-19 14:42:59.569493 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:42:59.569497 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:42:59.569501 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:42:59.569504 | orchestrator |
2025-05-19 14:42:59.569508 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-05-19 14:42:59.569512 | orchestrator | Monday 19 May 2025 14:42:20 +0000 (0:00:00.863) 0:05:19.125 ************
2025-05-19 14:42:59.569516 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:42:59.569519 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:42:59.569523 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:42:59.569527 | orchestrator |
2025-05-19 14:42:59.569530 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-05-19 14:42:59.569534 | orchestrator | Monday 19 May 2025 14:42:21 +0000 (0:00:01.155) 0:05:20.280 ************
2025-05-19 14:42:59.569538 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:42:59.569541 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:42:59.569545 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:42:59.569549 | orchestrator |
2025-05-19 14:42:59.569555 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-05-19 14:42:59.569559 | orchestrator | Monday 19 May 2025 14:42:22 +0000 (0:00:00.845) 0:05:21.126 ************
2025-05-19 14:42:59.569562 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:42:59.569566 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:42:59.569570 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:42:59.569573 | orchestrator |
2025-05-19 14:42:59.569577 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-05-19 14:42:59.569581 | orchestrator | Monday 19 May 2025 14:42:26 +0000 (0:00:04.429) 0:05:25.555 ************
2025-05-19 14:42:59.569585 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:42:59.569588 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:42:59.569592 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:42:59.569596 | orchestrator |
2025-05-19 14:42:59.569604 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-05-19 14:42:59.569608 | orchestrator | Monday 19 May 2025 14:42:30 +0000 (0:00:03.756) 0:05:29.312 ************
2025-05-19 14:42:59.569611 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:42:59.569615 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:42:59.569619 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:42:59.569622 | orchestrator |
2025-05-19 14:42:59.569626 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-05-19 14:42:59.569630 | orchestrator | Monday 19 May 2025 14:42:44 +0000 (0:00:13.974) 0:05:43.287 ************
2025-05-19 14:42:59.569633 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:42:59.569637 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:42:59.569641 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:42:59.569644 | orchestrator |
2025-05-19 14:42:59.569648 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-05-19 14:42:59.569652 | orchestrator | Monday 19 May 2025 14:42:45 +0000 (0:00:00.713) 0:05:44.000 ************
2025-05-19 14:42:59.569655 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:42:59.569659 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:42:59.569663 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:42:59.569666 | orchestrator |
2025-05-19 14:42:59.569670 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-05-19 14:42:59.569674 | orchestrator | Monday 19 May 2025 14:42:49 +0000 (0:00:04.526) 0:05:48.526 ************
2025-05-19 14:42:59.569677 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569681 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569685 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569688 | orchestrator |
2025-05-19 14:42:59.569692 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-05-19 14:42:59.569696 | orchestrator | Monday 19 May 2025 14:42:50 +0000 (0:00:00.335) 0:05:48.861 ************
2025-05-19 14:42:59.569699 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569705 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569709 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569712 | orchestrator |
2025-05-19 14:42:59.569716 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-05-19 14:42:59.569720 | orchestrator | Monday 19 May 2025 14:42:50 +0000 (0:00:00.646) 0:05:49.508 ************
2025-05-19 14:42:59.569723 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569727 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569730 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569734 | orchestrator |
2025-05-19 14:42:59.569738 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-05-19 14:42:59.569741 | orchestrator | Monday 19 May 2025 14:42:51 +0000 (0:00:00.329) 0:05:49.838 ************
2025-05-19 14:42:59.569745 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569749 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569752 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569756 | orchestrator |
2025-05-19 14:42:59.569760 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-05-19 14:42:59.569763 | orchestrator | Monday 19 May 2025 14:42:51 +0000 (0:00:00.300) 0:05:50.139 ************
2025-05-19 14:42:59.569767 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569771 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569774 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569778 | orchestrator |
2025-05-19 14:42:59.569782 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-05-19 14:42:59.569785 | orchestrator | Monday 19 May 2025 14:42:51 +0000 (0:00:00.318) 0:05:50.458 ************
2025-05-19 14:42:59.569789 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:42:59.569793 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:42:59.569796 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:42:59.569800 | orchestrator |
2025-05-19 14:42:59.569804 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-05-19 14:42:59.569810 | orchestrator | Monday 19 May 2025 14:42:52 +0000 (0:00:00.617) 0:05:51.075 ************
2025-05-19 14:42:59.569814 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:42:59.569818 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:42:59.569822 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:42:59.569825 | orchestrator |
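The handler order above is deliberate: on each node the backup keepalived, haproxy and proxysql containers are stopped, restarted and explicitly waited for first, while all the master-side handlers come out as skipped here. A compressed sketch of that backup-first sequence; `stop`, `start` and `wait_for` are hypothetical helpers standing in for the role's container tasks:

    def cycle_backup_loadbalancers(node, stop, start, wait_for):
        # Stop the standby stack first so the current VIP holder keeps serving.
        for svc in ("keepalived", "haproxy", "proxysql"):
            stop(node, svc)
        # Bring each service back and verify it is up before moving on.
        start(node, "haproxy")
        wait_for(node, "haproxy")
        start(node, "proxysql")
        wait_for(node, "proxysql")
        start(node, "keepalived")
        # The matching master handlers only run where a master restart is
        # actually required, which is why they are all skipped in this run.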
2025-05-19 14:42:59.569829 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-05-19 14:42:59.569833 | orchestrator | Monday 19 May 2025 14:42:57 +0000 (0:00:04.724) 0:05:55.800 ************
2025-05-19 14:42:59.569836 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:42:59.569840 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:42:59.569844 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:42:59.569847 | orchestrator |
2025-05-19 14:42:59.569851 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:42:59.569855 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-05-19 14:42:59.569859 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-05-19 14:42:59.569863 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-05-19 14:42:59.569866 | orchestrator |
2025-05-19 14:42:59.569870 | orchestrator |
2025-05-19 14:42:59.569876 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:42:59.569880 | orchestrator | Monday 19 May 2025 14:42:58 +0000 (0:00:00.924) 0:05:56.724 ************
2025-05-19 14:42:59.569884 | orchestrator | ===============================================================================
2025-05-19 14:42:59.569887 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.97s
2025-05-19 14:42:59.569891 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.37s
2025-05-19 14:42:59.569895 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.27s
2025-05-19 14:42:59.569898 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.72s
2025-05-19 14:42:59.569902 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.60s
2025-05-19 14:42:59.569906 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.55s
2025-05-19 14:42:59.569909 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.53s
2025-05-19 14:42:59.569913 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.43s
2025-05-19 14:42:59.569917 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.28s
2025-05-19 14:42:59.569921 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.21s
2025-05-19 14:42:59.569924 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.08s
2025-05-19 14:42:59.569928 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.07s
2025-05-19 14:42:59.569932 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.00s
2025-05-19 14:42:59.569935 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.95s
2025-05-19 14:42:59.569939 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.91s
2025-05-19 14:42:59.569943 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.79s
2025-05-19 14:42:59.569946 | orchestrator | loadbalancer : Wait for backup haproxy to start ------------------------- 3.76s
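The PLAY RECAP lines are regular enough to check mechanically, which is handy when scanning many of these job logs for failures. A small parsing sketch over the node-0 line above:

    import re

    recap = ("testbed-node-0 : ok=123  changed=76  unreachable=0 "
             "failed=0 skipped=97  rescued=0 ignored=0")

    host, _, stats = recap.partition(" : ")
    counts = {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", stats)}
    assert counts["failed"] == 0 and counts["unreachable"] == 0
    print(host.strip(), counts["ok"], counts["changed"])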
2025-05-19 14:42:59.569950 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.68s
2025-05-19 14:42:59.569954 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.68s
2025-05-19 14:42:59.569957 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.65s
2025-05-19 14:42:59.569963 | orchestrator | 2025-05-19 14:42:59 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:43:02.628101 | orchestrator | 2025-05-19 14:43:02 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:43:02.628504 | orchestrator | 2025-05-19 14:43:02 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED
2025-05-19 14:43:02.629497 | orchestrator | 2025-05-19 14:43:02 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED
2025-05-19 14:43:02.629520 | orchestrator | 2025-05-19 14:43:02 | INFO  | Wait 1 second(s) until the next check
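From this point the deploy tooling simply polls the three task IDs and sleeps between rounds, so every iteration prints the same three STARTED lines until the tasks finish. The shape of that loop, with a stubbed status lookup standing in for the real task-queue query:

    import time

    def get_state(task_id):
        # Stand-in for the real status query; returns a terminal state here
        # so the sketch terminates when run as-is.
        return "SUCCESS"

    def wait_for_tasks(task_ids, interval=1):
        pending = set(task_ids)
        while pending:
            for tid in sorted(pending):
                state = get_state(tid)
                print(f"Task {tid} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(tid)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

    wait_for_tasks(["da24b84a-bb0c-4b01-87b3-542158d3c936",
                    "b142a1ea-acb6-4e19-822a-e9c45680f266",
                    "6eab7744-8ddf-42e6-92e3-6cb2f6f046a4"])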
orchestrator | 2025-05-19 14:43:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:20.881347 | orchestrator | 2025-05-19 14:43:20 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:20.881441 | orchestrator | 2025-05-19 14:43:20 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:20.884682 | orchestrator | 2025-05-19 14:43:20 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:20.884719 | orchestrator | 2025-05-19 14:43:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:23.914808 | orchestrator | 2025-05-19 14:43:23 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:23.915665 | orchestrator | 2025-05-19 14:43:23 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:23.915681 | orchestrator | 2025-05-19 14:43:23 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:23.915687 | orchestrator | 2025-05-19 14:43:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:26.954072 | orchestrator | 2025-05-19 14:43:26 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:26.957382 | orchestrator | 2025-05-19 14:43:26 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:26.957937 | orchestrator | 2025-05-19 14:43:26 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:26.958120 | orchestrator | 2025-05-19 14:43:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:30.004812 | orchestrator | 2025-05-19 14:43:30 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:30.006077 | orchestrator | 2025-05-19 14:43:30 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:30.008478 | orchestrator | 2025-05-19 14:43:30 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:30.008521 | orchestrator | 2025-05-19 14:43:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:33.060861 | orchestrator | 2025-05-19 14:43:33 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:33.060969 | orchestrator | 2025-05-19 14:43:33 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:33.061611 | orchestrator | 2025-05-19 14:43:33 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:33.061638 | orchestrator | 2025-05-19 14:43:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:36.111570 | orchestrator | 2025-05-19 14:43:36 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:36.111690 | orchestrator | 2025-05-19 14:43:36 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:36.112920 | orchestrator | 2025-05-19 14:43:36 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:36.113007 | orchestrator | 2025-05-19 14:43:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:39.168994 | orchestrator | 2025-05-19 14:43:39 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:39.170282 | orchestrator | 2025-05-19 14:43:39 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:39.172299 | orchestrator | 2025-05-19 14:43:39 | INFO  | Task 
6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:39.172352 | orchestrator | 2025-05-19 14:43:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:42.221378 | orchestrator | 2025-05-19 14:43:42 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:42.223262 | orchestrator | 2025-05-19 14:43:42 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:42.225764 | orchestrator | 2025-05-19 14:43:42 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:42.226298 | orchestrator | 2025-05-19 14:43:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:45.280273 | orchestrator | 2025-05-19 14:43:45 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:45.280411 | orchestrator | 2025-05-19 14:43:45 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:45.280426 | orchestrator | 2025-05-19 14:43:45 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:45.280438 | orchestrator | 2025-05-19 14:43:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:48.316249 | orchestrator | 2025-05-19 14:43:48 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:48.318422 | orchestrator | 2025-05-19 14:43:48 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:48.318505 | orchestrator | 2025-05-19 14:43:48 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:48.318521 | orchestrator | 2025-05-19 14:43:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:51.373307 | orchestrator | 2025-05-19 14:43:51 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:51.374942 | orchestrator | 2025-05-19 14:43:51 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:51.376583 | orchestrator | 2025-05-19 14:43:51 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:51.377204 | orchestrator | 2025-05-19 14:43:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:54.428552 | orchestrator | 2025-05-19 14:43:54 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:54.430833 | orchestrator | 2025-05-19 14:43:54 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:54.433330 | orchestrator | 2025-05-19 14:43:54 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:54.433526 | orchestrator | 2025-05-19 14:43:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:43:57.473981 | orchestrator | 2025-05-19 14:43:57 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:43:57.477042 | orchestrator | 2025-05-19 14:43:57 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:43:57.478348 | orchestrator | 2025-05-19 14:43:57 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:43:57.478624 | orchestrator | 2025-05-19 14:43:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:44:00.526551 | orchestrator | 2025-05-19 14:44:00 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED 2025-05-19 14:44:00.526662 | orchestrator | 2025-05-19 14:44:00 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state 
STARTED
2025-05-19 14:44:00.526678 | orchestrator | 2025-05-19 14:44:00 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED
2025-05-19 14:44:00.526690 | orchestrator | 2025-05-19 14:44:00 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:44:03.584386 | orchestrator | 2025-05-19 14:44:03 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:44:03.586923 | orchestrator | 2025-05-19 14:44:03 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED
2025-05-19 14:44:03.590840 | orchestrator | 2025-05-19 14:44:03 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED
2025-05-19 14:44:03.590890 | orchestrator | 2025-05-19 14:44:03 | INFO  | Wait 1 second(s) until the next check
[... the same three state checks repeat every three seconds from 14:44:06 to 14:44:58 ...]
2025-05-19 14:45:01.651567 | orchestrator | 2025-05-19 14:45:01 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:45:01.653362 | orchestrator | 2025-05-19 14:45:01 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED
2025-05-19 14:45:01.655992 | orchestrator | 2025-05-19 14:45:01 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED
2025-05-19 14:45:01.656737 | orchestrator | 2025-05-19 14:45:01 | INFO  | Wait 1 second(s) until the next check
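The block above is the deploy wrapper's task watcher: each play is handed to the conductor as a background task, and the client then polls the task states until they settle. A minimal sketch of such a wait loop, assuming a hypothetical `fetch_task_state` helper (the real tooling resolves states through its Celery result backend):

```python
import time

def fetch_task_state(task_id: str) -> str:
    """Hypothetical lookup; the real client asks its result backend."""
    raise NotImplementedError

def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):            # copy: we mutate the set below
            state = fetch_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)         # stop watching settled tasks
        if pending:
            print(f"INFO  | Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
```

Note that the remote Ansible output is buffered while a task runs and only flushed once the task reaches SUCCESS, which is why the whole "Prepare deployment of Ceph services" play below arrives in one burst at 14:45:07 even though its internal timestamps start at 14:34:18.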
2025-05-19 14:45:04.708748 | orchestrator | 2025-05-19 14:45:04 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state STARTED
2025-05-19 14:45:04.710703 | orchestrator | 2025-05-19 14:45:04 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED
2025-05-19 14:45:04.713424 | orchestrator | 2025-05-19 14:45:04 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED
2025-05-19 14:45:04.713532 | orchestrator | 2025-05-19 14:45:04 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:45:07.774682 | orchestrator | 2025-05-19 14:45:07 | INFO  | Task dfff542f-e260-4a52-bdf3-ee6864abbe4e is in state STARTED
2025-05-19 14:45:07.780379 | orchestrator | 2025-05-19 14:45:07 | INFO  | Task da24b84a-bb0c-4b01-87b3-542158d3c936 is in state SUCCESS
2025-05-19 14:45:07.782837 | orchestrator |
2025-05-19 14:45:07.783134 | orchestrator |
2025-05-19 14:45:07.783168 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-05-19 14:45:07.783188 | orchestrator |
2025-05-19 14:45:07.783209 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-05-19 14:45:07.783230 | orchestrator | Monday 19 May 2025 14:34:18 +0000 (0:00:00.769) 0:00:00.769 ************
2025-05-19 14:45:07.783252 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:45:07.783272 | orchestrator |
2025-05-19 14:45:07.783292 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-05-19 14:45:07.783378 | orchestrator | Monday 19 May 2025 14:34:19 +0000 (0:00:01.065) 0:00:01.834 ************
2025-05-19 14:45:07.783401 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.783422 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.783443 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.783462 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.783482 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.783501 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.783519 | orchestrator |
2025-05-19 14:45:07.783612 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-05-19 14:45:07.783636 | orchestrator | Monday 19 May 2025 14:34:20 +0000 (0:00:01.474) 0:00:03.309 ************
2025-05-19 14:45:07.783654 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.783672 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.783690 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.783709 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.783728 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.783743 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.783761 | orchestrator |
2025-05-19 14:45:07.783777 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-05-19 14:45:07.783795 | orchestrator | Monday 19 May 2025 14:34:21 +0000 (0:00:00.747) 0:00:04.057 ************
2025-05-19 14:45:07.783812 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.783827 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.783843 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.783862 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.784020 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.784049 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.784073 | orchestrator |
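The two probes above feed the container_binary fact that is set next: roles in the ceph-ansible style detect an ostree/atomic host and the presence of a podman binary, then fall back to docker. A sketch of that decision in Python (the /run/ostree-booted marker and the podman-over-docker preference mirror the task names, not the role's literal code):

```python
import shutil
from pathlib import Path

def detect_container_binary() -> str:
    # "Check if it is atomic host": ostree-based hosts expose a marker file.
    is_atomic = Path("/run/ostree-booted").exists()
    # "Check if podman binary is present"
    has_podman = shutil.which("podman") is not None
    # Prefer podman where available; the docker-ps probes recorded later in
    # this log show that this testbed resolved to docker.
    return "podman" if (is_atomic or has_podman) else "docker"
```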
2025-05-19 14:45:07.784109 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-05-19 14:45:07.784143 | orchestrator | Monday 19 May 2025 14:34:22 +0000 (0:00:00.995) 0:00:05.053 ************
2025-05-19 14:45:07.784162 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.784181 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.784270 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.784290 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.784334 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.784353 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.784371 | orchestrator |
2025-05-19 14:45:07.784393 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-05-19 14:45:07.784412 | orchestrator | Monday 19 May 2025 14:34:23 +0000 (0:00:00.772) 0:00:05.825 ************
2025-05-19 14:45:07.784431 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.784449 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.784466 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.784484 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.784503 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.784521 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.784538 | orchestrator |
2025-05-19 14:45:07.784557 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-05-19 14:45:07.784575 | orchestrator | Monday 19 May 2025 14:34:23 +0000 (0:00:00.489) 0:00:06.315 ************
2025-05-19 14:45:07.784593 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.784611 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.784631 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.784649 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.784667 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.784687 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.784820 | orchestrator |
2025-05-19 14:45:07.784833 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-05-19 14:45:07.784853 | orchestrator | Monday 19 May 2025 14:34:24 +0000 (0:00:01.013) 0:00:07.329 ************
2025-05-19 14:45:07.784872 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.784893 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.784911 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.784928 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.784965 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.784983 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.785000 | orchestrator |
2025-05-19 14:45:07.785018 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-05-19 14:45:07.785035 | orchestrator | Monday 19 May 2025 14:34:25 +0000 (0:00:00.820) 0:00:08.150 ************
2025-05-19 14:45:07.785051 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.785067 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.785084 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.785103 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.785120 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.785136 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.785152 | orchestrator |
2025-05-19 14:45:07.785169 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
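ceph_cmd wraps every ceph CLI call so the same tasks work both on bare hosts and in containerized deployments. A simplified sketch of the pattern (flags and image name here are illustrative, not the exact ones the role renders):

```python
def build_ceph_cmd(containerized: bool, container_binary: str, image: str) -> list[str]:
    if not containerized:
        return ["ceph"]                      # bare metal: call the binary directly
    return [
        container_binary, "run", "--rm", "--net=host",
        "-v", "/etc/ceph:/etc/ceph:z",       # illustrative bind mount
        "--entrypoint=ceph", image,
    ]

# build_ceph_cmd(True, "docker", "quay.io/ceph/daemon") ->
# ["docker", "run", "--rm", "--net=host", "-v", "/etc/ceph:/etc/ceph:z",
#  "--entrypoint=ceph", "quay.io/ceph/daemon"]
```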
2025-05-19 14:45:07.785186 | orchestrator | Monday 19 May 2025 14:34:26 +0000 (0:00:00.921) 0:00:09.071 ************
2025-05-19 14:45:07.785202 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 14:45:07.785219 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-19 14:45:07.785236 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-19 14:45:07.785254 | orchestrator |
2025-05-19 14:45:07.785288 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-05-19 14:45:07.785305 | orchestrator | Monday 19 May 2025 14:34:27 +0000 (0:00:00.891) 0:00:09.963 ************
2025-05-19 14:45:07.785369 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.785388 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.785404 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.785420 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.785437 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.785453 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.785469 | orchestrator |
2025-05-19 14:45:07.785508 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-05-19 14:45:07.785687 | orchestrator | Monday 19 May 2025 14:34:28 +0000 (0:00:01.349) 0:00:11.313 ************
2025-05-19 14:45:07.785710 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 14:45:07.785726 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-19 14:45:07.785742 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-19 14:45:07.785755 | orchestrator |
2025-05-19 14:45:07.785772 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-05-19 14:45:07.785787 | orchestrator | Monday 19 May 2025 14:34:31 +0000 (0:00:02.850) 0:00:14.163 ************
2025-05-19 14:45:07.785803 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 14:45:07.785819 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-19 14:45:07.785835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-19 14:45:07.785851 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.785867 | orchestrator |
2025-05-19 14:45:07.785884 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-05-19 14:45:07.785900 | orchestrator | Monday 19 May 2025 14:34:32 +0000 (0:00:00.615) 0:00:14.779 ************
2025-05-19 14:45:07.785920 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.785940 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.785957 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.785989 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.786007 | orchestrator |
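"Find a running mon container" is a plain docker ps probe per monitor host; the command and its empty stdout are visible verbatim in the skipped results a few records below, which is what a not-yet-bootstrapped cluster looks like. The equivalent check:

```python
import subprocess

def find_running_mon(hostname: str, container_binary: str = "docker") -> str | None:
    """Return the id of the ceph-mon-<hostname> container, or None if absent."""
    result = subprocess.run(
        [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=False,
    )
    container_id = result.stdout.strip()
    return container_id or None    # rc 0 with empty stdout -> no mon running
```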
2025-05-19 14:45:07.786074 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-05-19 14:45:07.786085 | orchestrator | Monday 19 May 2025 14:34:33 +0000 (0:00:01.201) 0:00:15.980 ************
2025-05-19 14:45:07.786097 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.786110 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.786120 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.786130 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.786140 | orchestrator |
2025-05-19 14:45:07.786151 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-05-19 14:45:07.786163 | orchestrator | Monday 19 May 2025 14:34:33 +0000 (0:00:00.272) 0:00:16.253 ************
2025-05-19 14:45:07.786185 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-19 14:34:29.641753', 'end': '2025-05-19 14:34:29.911032', 'delta': '0:00:00.269279', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.786213 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-19 14:34:30.597348', 'end': '2025-05-19 14:34:30.858899', 'delta': '0:00:00.261551', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.786226 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-19 14:34:31.336061', 'end': '2025-05-19 14:34:31.625613', 'delta': '0:00:00.289552', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.786245 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.786256 | orchestrator |
2025-05-19 14:45:07.786267 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-05-19 14:45:07.786278 | orchestrator | Monday 19 May 2025 14:34:34 +0000 (0:00:00.243) 0:00:16.497 ************
2025-05-19 14:45:07.786289 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.786301 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.786496 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.786515 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.786529 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.786539 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.786548 | orchestrator |
2025-05-19 14:45:07.786558 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-05-19 14:45:07.786568 | orchestrator | Monday 19 May 2025 14:34:35 +0000 (0:00:01.261) 0:00:17.759 ************
2025-05-19 14:45:07.786577 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.786586 | orchestrator |
2025-05-19 14:45:07.786596 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-05-19 14:45:07.786608 | orchestrator | Monday 19 May 2025 14:34:36 +0000 (0:00:00.759) 0:00:18.518 ************
2025-05-19 14:45:07.786624 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.786640 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.786655 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.786673 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.786689 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.786704 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.786713 | orchestrator |
2025-05-19 14:45:07.786723 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-05-19 14:45:07.786733 | orchestrator | Monday 19 May 2025 14:34:37 +0000 (0:00:01.172) 0:00:19.690 ************
2025-05-19 14:45:07.786744 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.786760 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.786775 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.786791 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.786809 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.786825 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.786882 | orchestrator |
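The fsid tasks that follow implement a reuse-or-generate decision: take the fsid an already-running cluster reports, otherwise create a fresh one. In this run every branch ends up skipped, which is what it looks like when the fsid is pinned in the configuration instead. The fallback logic in miniature (query_current_fsid stands in for the role's ceph CLI call):

```python
import uuid
from typing import Callable, Optional

def resolve_fsid(query_current_fsid: Callable[[], Optional[str]]) -> str:
    current = query_current_fsid()    # None when no cluster answers
    if current:
        return current                # "Set_fact fsid from current_fsid"
    return str(uuid.uuid4())          # "Generate cluster fsid"
```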
2025-05-19 14:45:07.786929 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-19 14:45:07.786940 | orchestrator | Monday 19 May 2025 14:34:38 +0000 (0:00:01.164) 0:00:20.855 ************
2025-05-19 14:45:07.786992 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.787007 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.787022 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.787038 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.787048 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.787057 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.787066 | orchestrator |
2025-05-19 14:45:07.787076 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-05-19 14:45:07.787126 | orchestrator | Monday 19 May 2025 14:34:39 +0000 (0:00:00.739) 0:00:21.594 ************
2025-05-19 14:45:07.787138 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.787155 | orchestrator |
2025-05-19 14:45:07.787172 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-05-19 14:45:07.787188 | orchestrator | Monday 19 May 2025 14:34:39 +0000 (0:00:00.117) 0:00:21.713 ************
2025-05-19 14:45:07.787203 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.787220 | orchestrator |
2025-05-19 14:45:07.787236 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-19 14:45:07.787252 | orchestrator | Monday 19 May 2025 14:34:39 +0000 (0:00:00.235) 0:00:21.948 ************
2025-05-19 14:45:07.787273 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.787283 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.787292 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.787302 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.787338 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.787355 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.787370 | orchestrator |
2025-05-19 14:45:07.787388 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-05-19 14:45:07.787415 | orchestrator | Monday 19 May 2025 14:34:40 +0000 (0:00:00.713) 0:00:22.661 ************
2025-05-19 14:45:07.787432 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.787448 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.787465 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.787481 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.787578 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.787598 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.787615 | orchestrator |
2025-05-19 14:45:07.787631 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-05-19 14:45:07.787648 | orchestrator | Monday 19 May 2025 14:34:41 +0000 (0:00:00.846) 0:00:23.508 ************
2025-05-19 14:45:07.787733 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.787755 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.787772 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.787789 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.787806 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.787824 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.787841 | orchestrator |
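The "Resolve device link(s)" / "build devices from resolved symlinks" pairs (and their dedicated_device and bluestore_wal_device twins below) normalize devices given as /dev/disk/by-* symlinks to their canonical block devices before any OSD work happens; all of them are skipped here because the testbed names its devices directly. A minimal equivalent:

```python
from pathlib import Path

def resolve_device_links(devices: list[str]) -> list[str]:
    """Map symlinks such as /dev/disk/by-id/... to canonical /dev/sdX paths."""
    return [str(Path(dev).resolve()) for dev in devices]
```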
2025-05-19 14:45:07.787858 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-05-19 14:45:07.787878 | orchestrator | Monday 19 May 2025 14:34:41 +0000 (0:00:00.934) 0:00:24.322 ************
2025-05-19 14:45:07.787895 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.787910 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.787920 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.787929 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.787939 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.787948 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.787959 | orchestrator |
2025-05-19 14:45:07.787976 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-05-19 14:45:07.788111 | orchestrator | Monday 19 May 2025 14:34:42 +0000 (0:00:00.794) 0:00:25.256 ************
2025-05-19 14:45:07.788130 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.788144 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.788249 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.788264 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.788277 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.788291 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.788304 | orchestrator |
2025-05-19 14:45:07.788345 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-05-19 14:45:07.788362 | orchestrator | Monday 19 May 2025 14:34:43 +0000 (0:00:00.794) 0:00:26.051 ************
2025-05-19 14:45:07.788379 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.788398 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.788457 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.788477 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.788495 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.788507 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.788517 | orchestrator |
2025-05-19 14:45:07.788527 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-05-19 14:45:07.788537 | orchestrator | Monday 19 May 2025 14:34:44 +0000 (0:00:00.717) 0:00:26.768 ************
2025-05-19 14:45:07.788547 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.788556 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.788566 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.788586 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.788596 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.788605 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.788615 | orchestrator |
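"Collect existed devices" below only matters when OSD auto-discovery is enabled: it walks the gathered ansible_facts['devices'] tree and keeps disks that are empty and unclaimed. Every item this run logs as skipped would also fall out of a filter along these lines (a sketch under that assumption, not the role's literal conditions):

```python
def collect_candidate_devices(devices: dict[str, dict]) -> list[str]:
    candidates = []
    for name, facts in devices.items():
        if name.startswith(("loop", "dm-", "sr")):
            continue                  # virtual loop devices, LVM mappings, CD-ROMs
        if facts.get("removable") == "1":
            continue                  # removable media such as the config-drive
        if facts.get("partitions"):
            continue                  # already partitioned (the sda root disk)
        if facts.get("holders"):
            continue                  # already claimed, e.g. by an existing ceph LV
        candidates.append(f"/dev/{name}")
    return candidates
```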
2025-05-19 14:45:07.788624 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-05-19 14:45:07.788634 | orchestrator | Monday 19 May 2025 14:34:45 +0000 (0:00:00.611) 0:00:27.380 ************
2025-05-19 14:45:07.788645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
[... matching skip records follow for the remaining device facts on each node: the loop0-loop7 virtual devices, the sda root disk and the sr0 config-drive on all six nodes, plus the existing ceph dm-* volumes and sdb/sdc/sdd data disks on testbed-node-3, testbed-node-4 and testbed-node-5 ...]
2025-05-19 14:45:07.788882 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.789036 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.789145 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.789505 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.789665 | orchestrator | skipping: [testbed-node-5] =>
(item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 14:45:07.789675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 14:45:07.789685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 14:45:07.789695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 14:45:07.789718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:45:07.789736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:45:07.789747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--18cd8a80--96d5--5946--80eb--7d63885b2b76-osd--block--18cd8a80--96d5--5946--80eb--7d63885b2b76'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K51oYj-rXRT-7pk7-S3cd-z0JP-s0Xf-jUtv0X', 'scsi-0QEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834', 'scsi-SQEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:45:07.789767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ad566f4e--67fb--565a--8346--037c8100dc24-osd--block--ad566f4e--67fb--565a--8346--037c8100dc24'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rB9Rm5-jHsC-jbcH-OYEr-kT22-vWtN-cRSTcD', 'scsi-0QEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738', 'scsi-SQEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:45:07.789780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--14b77220--8a02--5c14--b369--aaa75d02e7a5-osd--block--14b77220--8a02--5c14--b369--aaa75d02e7a5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UAvDnF-xl55-Dn60-gmP5-X2Ty-dkRp-hCEb4M', 'scsi-0QEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538', 'scsi-SQEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:45:07.789789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb', 'scsi-SQEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:45:07.789797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:45:07.789805 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.789814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d28da045--49d6--58b1--95f0--26301c413660-osd--block--d28da045--49d6--58b1--95f0--26301c413660'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QeHnBy-RQtO-xZd0-LcD5-L29s-TGP5-g3wY4z', 'scsi-0QEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964', 'scsi-SQEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:45:07.789822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a', 'scsi-SQEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:45:07.789834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:45:07.789852 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.789861 | orchestrator | 2025-05-19 14:45:07.789869 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-05-19 14:45:07.789877 | orchestrator | Monday 19 May 2025 14:34:46 +0000 (0:00:01.243) 0:00:28.624 ************ 2025-05-19 14:45:07.789904 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.789913 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.789921 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.789929 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.789938 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.789950 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.789970 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.789979 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.789988 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb6c5de0-1b22-4c77-a0bd-6caa2d18e501', 'scsi-SQEMU_QEMU_HARDDISK_cb6c5de0-1b22-4c77-a0bd-6caa2d18e501'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb6c5de0-1b22-4c77-a0bd-6caa2d18e501-part1', 'scsi-SQEMU_QEMU_HARDDISK_cb6c5de0-1b22-4c77-a0bd-6caa2d18e501-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb6c5de0-1b22-4c77-a0bd-6caa2d18e501-part14', 'scsi-SQEMU_QEMU_HARDDISK_cb6c5de0-1b22-4c77-a0bd-6caa2d18e501-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb6c5de0-1b22-4c77-a0bd-6caa2d18e501-part15', 'scsi-SQEMU_QEMU_HARDDISK_cb6c5de0-1b22-4c77-a0bd-6caa2d18e501-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb6c5de0-1b22-4c77-a0bd-6caa2d18e501-part16', 'scsi-SQEMU_QEMU_HARDDISK_cb6c5de0-1b22-4c77-a0bd-6caa2d18e501-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790010 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790060 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790072 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790080 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790088 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790096 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790105 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790140 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790149 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790159 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99167c27-3ae4-4936-833c-d0be439dac7f', 'scsi-SQEMU_QEMU_HARDDISK_99167c27-3ae4-4936-833c-d0be439dac7f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99167c27-3ae4-4936-833c-d0be439dac7f-part1', 'scsi-SQEMU_QEMU_HARDDISK_99167c27-3ae4-4936-833c-d0be439dac7f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99167c27-3ae4-4936-833c-d0be439dac7f-part14', 'scsi-SQEMU_QEMU_HARDDISK_99167c27-3ae4-4936-833c-d0be439dac7f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99167c27-3ae4-4936-833c-d0be439dac7f-part15', 'scsi-SQEMU_QEMU_HARDDISK_99167c27-3ae4-4936-833c-d0be439dac7f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99167c27-3ae4-4936-833c-d0be439dac7f-part16', 'scsi-SQEMU_QEMU_HARDDISK_99167c27-3ae4-4936-833c-d0be439dac7f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790176 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790184 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.790198 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790207 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790215 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790223 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790231 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790245 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790262 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790271 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790279 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69160ba1-4fd3-4019-98a7-b22975faa0b8', 'scsi-SQEMU_QEMU_HARDDISK_69160ba1-4fd3-4019-98a7-b22975faa0b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69160ba1-4fd3-4019-98a7-b22975faa0b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_69160ba1-4fd3-4019-98a7-b22975faa0b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69160ba1-4fd3-4019-98a7-b22975faa0b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_69160ba1-4fd3-4019-98a7-b22975faa0b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69160ba1-4fd3-4019-98a7-b22975faa0b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_69160ba1-4fd3-4019-98a7-b22975faa0b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69160ba1-4fd3-4019-98a7-b22975faa0b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_69160ba1-4fd3-4019-98a7-b22975faa0b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790300 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790325 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.790341 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f79a0596--c901--5dda--8c3d--7673c0794e9f-osd--block--f79a0596--c901--5dda--8c3d--7673c0794e9f', 'dm-uuid-LVM-6XjVVGnIu5dfK03NqnV2FLRoxstuMusnG99v2bfLI3funxirDTVcA7D0I8z0Kks5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790350 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--be132d09--93e5--58e2--99ec--48d3b83dc2dd-osd--block--be132d09--93e5--58e2--99ec--48d3b83dc2dd', 'dm-uuid-LVM-s9yX6STbOcEYw0jykggC8wY1mdrtBgcLNGy1nnupdvMuFCX9Ez12c63i8zTG99hb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790367 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.790375 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790400 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790423 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--14b77220--8a02--5c14--b369--aaa75d02e7a5-osd--block--14b77220--8a02--5c14--b369--aaa75d02e7a5', 'dm-uuid-LVM-SogVLv5AA1iwBc4y1xxdo7yUfHOfzqDLCfsjyHqaQVU5sFt0qrdjbqGcyvu8YH29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790431 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790439 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d28da045--49d6--58b1--95f0--26301c413660-osd--block--d28da045--49d6--58b1--95f0--26301c413660', 'dm-uuid-LVM-r50SJW42xBIsxitZY0Vrid8wHWzvkHrTKt3Pg3cc1gIBl4KoEAalds8FVg26GTq4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790452 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790460 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790499 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790509 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790517 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790526 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part1', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part14', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part15', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part16', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790558 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790568 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f79a0596--c901--5dda--8c3d--7673c0794e9f-osd--block--f79a0596--c901--5dda--8c3d--7673c0794e9f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yYQ1Ui-9zvQ-fjxX-66QV-fkvC-JTKz-e8FWrp', 'scsi-0QEMU_QEMU_HARDDISK_680c5e0d-c4c7-4132-acba-9735c42c1af0', 'scsi-SQEMU_QEMU_HARDDISK_680c5e0d-c4c7-4132-acba-9735c42c1af0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790577 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790585 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--be132d09--93e5--58e2--99ec--48d3b83dc2dd-osd--block--be132d09--93e5--58e2--99ec--48d3b83dc2dd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lscncc-A5cD-eljx-6h5C-Xk73-kXPo-y2jZjU', 'scsi-0QEMU_QEMU_HARDDISK_b41d3d7b-c0e8-42b0-b403-509e3ccc1be2', 'scsi-SQEMU_QEMU_HARDDISK_b41d3d7b-c0e8-42b0-b403-509e3ccc1be2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790600 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9a454d9-5190-46d7-bf1d-412c3cdef809', 'scsi-SQEMU_QEMU_HARDDISK_b9a454d9-5190-46d7-bf1d-412c3cdef809'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790643 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.790651 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790660 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--18cd8a80--96d5--5946--80eb--7d63885b2b76-osd--block--18cd8a80--96d5--5946--80eb--7d63885b2b76', 'dm-uuid-LVM-6xlILYCsDgmXJUwznnA8gdmMneRu8jjdxjdLRJCHvX8zKbKkjGruy749r1Ul6j8k'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790676 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad566f4e--67fb--565a--8346--037c8100dc24-osd--block--ad566f4e--67fb--565a--8346--037c8100dc24', 'dm-uuid-LVM-kyHMoxOUeHOOnPVhxZlIuw1obDjedo4W3Zd21TPzF1Lso8MAilmhfuIhJvlF2J2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790703 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790718 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790726 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--14b77220--8a02--5c14--b369--aaa75d02e7a5-osd--block--14b77220--8a02--5c14--b369--aaa75d02e7a5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UAvDnF-xl55-Dn60-gmP5-X2Ty-dkRp-hCEb4M', 'scsi-0QEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538', 'scsi-SQEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790738 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790752 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d28da045--49d6--58b1--95f0--26301c413660-osd--block--d28da045--49d6--58b1--95f0--26301c413660'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QeHnBy-RQtO-xZd0-LcD5-L29s-TGP5-g3wY4z', 'scsi-0QEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964', 'scsi-SQEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790761 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790769 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a', 'scsi-SQEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790792 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790803 | orchestrator | skipping: [testbed-node-4] 
2025-05-19 14:45:07.790816 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790825 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790833 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790848 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790867 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790877 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--18cd8a80--96d5--5946--80eb--7d63885b2b76-osd--block--18cd8a80--96d5--5946--80eb--7d63885b2b76'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K51oYj-rXRT-7pk7-S3cd-z0JP-s0Xf-jUtv0X', 'scsi-0QEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834', 'scsi-SQEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:45:07.790885 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ad566f4e--67fb--565a--8346--037c8100dc24-osd--block--ad566f4e--67fb--565a--8346--037c8100dc24'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rB9Rm5-jHsC-jbcH-OYEr-kT22-vWtN-cRSTcD', 'scsi-0QEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738', 'scsi-SQEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.790899 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb', 'scsi-SQEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.790910 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-19 14:45:07.790944 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.790953 | orchestrator |
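The wall of per-device skips above is the OSD auto-discovery step deciding it has nothing to do: every item carries false_condition: osd_auto_discovery | default(False) | bool, so each block device reported by the gathered facts (the unbacked loop devices, the root disk sda, the Ceph-claimed sdb/sdc, the spare sdd, the config-drive sr0) is enumerated and then skipped. A minimal sketch of that pattern, as an assumed task shape rather than the ceph-ansible source:

```yaml
# Hypothetical sketch of an osd_auto_discovery-style device loop; not the role's actual task.
- name: Collect candidate OSD devices from gathered hardware facts
  ansible.builtin.set_fact:
    _candidate_osd_devices: "{{ _candidate_osd_devices | default([]) + ['/dev/' + item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  when:
    - osd_auto_discovery | default(False) | bool   # the false_condition shown in the log
    - item.value.partitions | length == 0          # the root disk sda carries partitions
    - item.value.holders | length == 0             # sdb/sdc already hold Ceph LVM volumes
    - item.value.sectors | int > 0                 # unbacked loop devices report 0 sectors
```

Because the testbed pins its OSD devices explicitly, the flag stays false and this loop is pure no-op noise in the log.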
2025-05-19 14:45:07.790961 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-05-19 14:45:07.790969 | orchestrator | Monday 19 May 2025 14:34:47 +0000 (0:00:01.643) 0:00:30.267 ************
2025-05-19 14:45:07.790977 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.790986 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.790993 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.791006 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.791014 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.791022 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.791029 | orchestrator |
2025-05-19 14:45:07.791037 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-05-19 14:45:07.791045 | orchestrator | Monday 19 May 2025 14:34:49 +0000 (0:00:01.375) 0:00:31.643 ************
2025-05-19 14:45:07.791053 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.791060 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.791068 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.791087 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.791095 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.791102 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.791110 | orchestrator |
2025-05-19 14:45:07.791118 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-19 14:45:07.791126 | orchestrator | Monday 19 May 2025 14:34:49 +0000 (0:00:00.677) 0:00:32.321 ************
2025-05-19 14:45:07.791139 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.791147 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.791154 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.791162 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.791170 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.791177 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.791185 | orchestrator |
2025-05-19 14:45:07.791193 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-19 14:45:07.791200 | orchestrator | Monday 19 May 2025 14:34:50 +0000 (0:00:00.976) 0:00:33.297 ************
2025-05-19 14:45:07.791208 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.791216 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.791224 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.791231 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.791239 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.791247 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.791254 | orchestrator |
2025-05-19 14:45:07.791262 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-19 14:45:07.791270 | orchestrator | Monday 19 May 2025 14:34:51 +0000 (0:00:00.556) 0:00:33.854 ************
2025-05-19 14:45:07.791278 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.791285 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.791293 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.791300 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.791325 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.791333 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.791341 | orchestrator |
2025-05-19 14:45:07.791349 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-19 14:45:07.791357 | orchestrator | Monday 19 May 2025 14:34:52 +0000 (0:00:00.674) 0:00:34.529 ************
2025-05-19 14:45:07.791364 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.791372 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.791379 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.791387 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.791395 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.791403 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.791410 | orchestrator |
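The four crush-rule tasks above implement a read-or-default handshake: stat the ceph conf, set a compiled-in default, then conditionally read and override the value only when a conf already exists, which is why every read/set pair is skipped on this fresh deployment. A rough sketch of that pattern, with the file path, grep expression, and variable names assumed rather than taken from the role:

```yaml
# Assumed shape of the read-or-default crush rule pattern; names are illustrative.
- name: Check if the ceph conf exists
  ansible.builtin.stat:
    path: /etc/ceph/ceph.conf
  register: ceph_conf

- name: Set default osd_pool_default_crush_rule fact
  ansible.builtin.set_fact:
    osd_pool_default_crush_rule: "{{ ceph_osd_pool_default_crush_rule | default(-1) }}"

- name: Read osd pool default crush rule
  ansible.builtin.command: grep 'osd pool default crush rule' /etc/ceph/ceph.conf
  register: crush_rule_read
  changed_when: false
  failed_when: false
  when: ceph_conf.stat.exists | bool

- name: Set osd_pool_default_crush_rule fact
  ansible.builtin.set_fact:
    osd_pool_default_crush_rule: "{{ crush_rule_read.stdout.split('=') | last | trim }}"
  when:
    - ceph_conf.stat.exists | bool
    - crush_rule_read.rc == 0
```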
2025-05-19 14:45:07.791418 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-05-19 14:45:07.791426 | orchestrator | Monday 19 May 2025 14:34:53 +0000 (0:00:01.042) 0:00:35.572 ************
2025-05-19 14:45:07.791434 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 14:45:07.791442 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-19 14:45:07.791450 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-05-19 14:45:07.791457 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-05-19 14:45:07.791465 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-05-19 14:45:07.791473 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-19 14:45:07.791481 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-05-19 14:45:07.791489 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-05-19 14:45:07.791496 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-19 14:45:07.791504 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-19 14:45:07.791512 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-05-19 14:45:07.791519 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-19 14:45:07.791527 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-19 14:45:07.791535 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-19 14:45:07.791543 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-19 14:45:07.791550 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-19 14:45:07.791558 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-19 14:45:07.791566 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-19 14:45:07.791579 | orchestrator |
2025-05-19 14:45:07.791587 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-05-19 14:45:07.791594 | orchestrator | Monday 19 May 2025 14:34:56 +0000 (0:00:03.241) 0:00:38.813 ************
2025-05-19 14:45:07.791602 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 14:45:07.791610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-19 14:45:07.791618 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-19 14:45:07.791626 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.791633 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-19 14:45:07.791641 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-19 14:45:07.791652 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-19 14:45:07.791660 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.791668 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-19 14:45:07.791676 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-19 14:45:07.791683 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-19 14:45:07.791691 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.791704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-19 14:45:07.791712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-19 14:45:07.791719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-19 14:45:07.791727 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.791735 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-19 14:45:07.791743 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-19 14:45:07.791750 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-19 14:45:07.791758 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.791766 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-19 14:45:07.791774 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-19 14:45:07.791781 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-19 14:45:07.791789 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.791797 | orchestrator |
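Above, every host resolves the address of each of the three monitors (testbed-node-0/1/2), giving the 18 ok items (6 hosts times 3 monitors); the ipv6 twin of the task is skipped because the deployment is ipv4-only. The resulting fact is a list of name/address pairs along the lines of this sketch, where the group name and fact paths are assumptions:

```yaml
# Illustrative only: build a list of monitor name/address pairs for ipv4.
- name: Set_fact _monitor_addresses - ipv4
  ansible.builtin.set_fact:
    _monitor_addresses: >-
      {{ _monitor_addresses | default([])
         + [{'name': item,
             'addr': hostvars[item]['ansible_facts']['default_ipv4']['address']}] }}
  loop: "{{ groups['mons'] }}"
  when: ip_version == 'ipv4'
```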
2025-05-19 14:45:07.791804 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-05-19 14:45:07.791812 | orchestrator | Monday 19 May 2025 14:34:57 +0000 (0:00:00.631) 0:00:39.445 ************
2025-05-19 14:45:07.791820 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.791827 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.791835 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.791843 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:45:07.791851 | orchestrator |
2025-05-19 14:45:07.791859 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-19 14:45:07.791867 | orchestrator | Monday 19 May 2025 14:34:58 +0000 (0:00:01.404) 0:00:40.851 ************
2025-05-19 14:45:07.791875 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.791883 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.791890 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.791898 | orchestrator |
2025-05-19 14:45:07.791906 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-19 14:45:07.791914 | orchestrator | Monday 19 May 2025 14:34:58 +0000 (0:00:00.407) 0:00:41.259 ************
2025-05-19 14:45:07.791922 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.791929 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.791937 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.791945 | orchestrator |
2025-05-19 14:45:07.791952 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-19 14:45:07.791965 | orchestrator | Monday 19 May 2025 14:34:59 +0000 (0:00:00.418) 0:00:41.677 ************
2025-05-19 14:45:07.791973 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.791980 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.791988 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.791996 | orchestrator |
2025-05-19 14:45:07.792003 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-05-19 14:45:07.792011 | orchestrator | Monday 19 May 2025 14:34:59 +0000 (0:00:00.294) 0:00:41.972 ************
2025-05-19 14:45:07.792019 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.792027 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.792035 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.792042 | orchestrator |
2025-05-19 14:45:07.792050 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-05-19 14:45:07.792058 | orchestrator | Monday 19 May 2025 14:35:00 +0000 (0:00:00.382) 0:00:42.354 ************
2025-05-19 14:45:07.792065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 14:45:07.792073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 14:45:07.792081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 14:45:07.792089 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.792096 | orchestrator |
2025-05-19 14:45:07.792104 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-19 14:45:07.792112 | orchestrator | Monday 19 May 2025 14:35:00 +0000 (0:00:00.528) 0:00:42.883 ************
2025-05-19 14:45:07.792120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 14:45:07.792127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 14:45:07.792135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 14:45:07.792143 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.792150 | orchestrator |
2025-05-19 14:45:07.792158 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-19 14:45:07.792166 | orchestrator | Monday 19 May 2025 14:35:00 +0000 (0:00:00.439) 0:00:43.323 ************
2025-05-19 14:45:07.792174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 14:45:07.792181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 14:45:07.792189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 14:45:07.792197 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.792204 | orchestrator |
2025-05-19 14:45:07.792212 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-05-19 14:45:07.792220 | orchestrator | Monday 19 May 2025 14:35:01 +0000 (0:00:00.470) 0:00:43.793 ************
2025-05-19 14:45:07.792227 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.792235 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.792243 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.792250 | orchestrator |
2025-05-19 14:45:07.792258 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-05-19 14:45:07.792269 | orchestrator | Monday 19 May 2025 14:35:01 +0000 (0:00:00.481) 0:00:44.275 ************
2025-05-19 14:45:07.792277 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-19 14:45:07.792285 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-19 14:45:07.792293 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-19 14:45:07.792300 | orchestrator |
2025-05-19 14:45:07.792324 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-05-19 14:45:07.792332 | orchestrator | Monday 19 May 2025 14:35:02 +0000 (0:00:00.999) 0:00:45.274 ************
2025-05-19 14:45:07.792345 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 14:45:07.792353 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-19 14:45:07.792361 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-19 14:45:07.792369 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-19 14:45:07.792383 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-19 14:45:07.792391 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-19 14:45:07.792398 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-19 14:45:07.792406 | orchestrator |
2025-05-19 14:45:07.792414 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-05-19 14:45:07.792422 | orchestrator | Monday 19 May 2025 14:35:04 +0000 (0:00:01.398) 0:00:46.673 ************
2025-05-19 14:45:07.792430 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 14:45:07.792438 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-19 14:45:07.792446 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-19 14:45:07.792454 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-19 14:45:07.792461 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-19 14:45:07.792469 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-19 14:45:07.792477 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-19 14:45:07.792485 | orchestrator |
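ceph_run_cmd and ceph_admin_command are per-host facts for invoking the ceph CLI; the loop above runs once on testbed-node-0 and delegates to every node (including testbed-manager at 192.168.16.5) so each host ends up with its own copy of the fact. In a containerized deployment the command typically wraps ceph in the container runtime; the following is an assumed approximation, with group names and registry variables invented for illustration, not the role's literal template:

```yaml
# Assumed approximation of a containerized ceph_run_cmd fact; not the role source.
- name: Set_fact ceph_run_cmd
  ansible.builtin.set_fact:
    ceph_run_cmd: >-
      {{ container_binary }} run --rm --net=host
      -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z
      --entrypoint=ceph {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
  delegate_to: "{{ item }}"
  delegate_facts: true
  loop: "{{ groups['mons'] | union(groups['osds']) | union(['testbed-manager']) }}"
  run_once: true
```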
2025-05-19 14:45:07.792493 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-19 14:45:07.792500 | orchestrator | Monday 19 May 2025 14:35:06 +0000 (0:00:02.288) 0:00:48.961 ************
2025-05-19 14:45:07.792509 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:45:07.792518 | orchestrator |
2025-05-19 14:45:07.792526 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-19 14:45:07.792533 | orchestrator | Monday 19 May 2025 14:35:07 +0000 (0:00:01.224) 0:00:50.185 ************
2025-05-19 14:45:07.792541 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:45:07.792549 | orchestrator |
2025-05-19 14:45:07.792557 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-19 14:45:07.792565 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:01.213) 0:00:51.398 ************
2025-05-19 14:45:07.792573 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.792580 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.792588 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.792596 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.792604 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.792611 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.792619 | orchestrator |
2025-05-19 14:45:07.792627 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-19 14:45:07.792635 | orchestrator | Monday 19 May 2025 14:35:09 +0000 (0:00:00.622) 0:00:52.021 ************
2025-05-19 14:45:07.792642 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.792650 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.792658 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.792666 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.792673 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.792681 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.792689 | orchestrator |
2025-05-19 14:45:07.792697 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-19 14:45:07.792704 | orchestrator | Monday 19 May 2025 14:35:11 +0000 (0:00:01.511) 0:00:53.532 ************
2025-05-19 14:45:07.792712 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.792720 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.792728 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.792735 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.792748 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.792756 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.792774 | orchestrator |
2025-05-19 14:45:07.792783 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-19 14:45:07.792791 | orchestrator | Monday 19 May 2025 14:35:12 +0000 (0:00:01.142) 0:00:54.674 ************
2025-05-19 14:45:07.792798 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.792806 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.792814 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.792822 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.792830 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.792838 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.792845 | orchestrator |
2025-05-19 14:45:07.792853 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-19 14:45:07.792861 | orchestrator | Monday 19 May 2025 14:35:13 +0000 (0:00:01.354) 0:00:56.029 ************
2025-05-19 14:45:07.792869 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.792880 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.792888 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.792896 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.792904 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.792912 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.792919 | orchestrator |
2025-05-19 14:45:07.792927 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-05-19 14:45:07.792935 | orchestrator | Monday 19 May 2025 14:35:14 +0000 (0:00:00.821) 0:00:56.851 ************
2025-05-19 14:45:07.792948 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.792956 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.792964 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.792971 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.792979 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.792987 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.792995 | orchestrator |
2025-05-19 14:45:07.793002 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-19 14:45:07.793010 | orchestrator | Monday 19 May 2025 14:35:15 +0000 (0:00:00.922) 0:00:57.773 ************
2025-05-19 14:45:07.793018 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.793026 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.793033 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.793041 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.793049 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.793056 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.793064 | orchestrator |
2025-05-19 14:45:07.793072 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-05-19 14:45:07.793080 | orchestrator | Monday 19 May 2025 14:35:16 +0000 (0:00:00.786) 0:00:58.560 ************
2025-05-19 14:45:07.793088 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.793095 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.793103 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.793111 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.793119 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.793126 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.793134 | orchestrator |
2025-05-19 14:45:07.793142 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-05-19 14:45:07.793149 | orchestrator | Monday 19 May 2025 14:35:17 +0000 (0:00:01.115) 0:00:59.675 ************
2025-05-19 14:45:07.793157 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.793165 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.793173 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.793180 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.793188 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.793196 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.793203 | orchestrator |
2025-05-19 14:45:07.793211 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-05-19 14:45:07.793224 | orchestrator | Monday 19 May 2025 14:35:18 +0000 (0:00:00.887) 0:01:01.146 ************
2025-05-19 14:45:07.793232 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.793239 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.793247 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.793255 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.793262 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.793270 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.793278 | orchestrator |
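check_running_containers.yml probes for daemon containers that may already exist, so later handlers know whether anything is running; that is why the mon and mgr checks only return ok on the control-plane nodes (testbed-node-0/1/2) and the osd/mds/rgw checks only on testbed-node-3/4/5, with every other host skipped. A plausible shape for one of these probes, with the container name pattern and register variable assumed:

```yaml
# Plausible shape of a daemon-container probe; names are illustrative.
- name: Check for a mon container
  ansible.builtin.command: >-
    {{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false
  check_mode: false
  when: inventory_hostname in groups.get('mons', [])
```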
2025-05-19 14:45:07.793286 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-05-19 14:45:07.793293 | orchestrator | Monday 19 May 2025 14:35:19 +0000 (0:00:00.887) 0:01:02.034 ************
2025-05-19 14:45:07.793301 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.793363 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.793373 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.793381 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.793389 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.793397 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.793404 | orchestrator |
2025-05-19 14:45:07.793412 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-05-19 14:45:07.793420 | orchestrator | Monday 19 May 2025 14:35:21 +0000 (0:00:01.352) 0:01:03.386 ************
2025-05-19 14:45:07.793428 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.793435 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.793443 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.793451 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.793458 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.793466 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.793474 | orchestrator |
2025-05-19 14:45:07.793481 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-05-19 14:45:07.793489 | orchestrator | Monday 19 May 2025 14:35:21 +0000 (0:00:00.650) 0:01:04.037 ************
2025-05-19 14:45:07.793497 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.793504 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.793512 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.793520 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.793527 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.793535 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.793543 | orchestrator |
2025-05-19 14:45:07.793551 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-05-19 14:45:07.793559 | orchestrator | Monday 19 May 2025 14:35:22 +0000 (0:00:00.759) 0:01:04.796 ************
2025-05-19 14:45:07.793566 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.793574 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.793582 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.793589 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.793597 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.793605 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.793612 | orchestrator |
2025-05-19 14:45:07.793620 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-05-19 14:45:07.793628 | orchestrator | Monday 19 May 2025 14:35:23 +0000 (0:00:00.629) 0:01:05.426 ************
2025-05-19 14:45:07.793636 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.793644 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.793651 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.793659 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.793667 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.793674 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.793682 | orchestrator |
2025-05-19 14:45:07.793689 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-05-19 14:45:07.793697 | orchestrator | Monday 19 May 2025 14:35:23 +0000 (0:00:00.710) 0:01:06.137 ************
2025-05-19 14:45:07.793709 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.793717 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.793730 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.793737 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.793745 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.793753 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.793760 | orchestrator |
2025-05-19 14:45:07.793768 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-05-19 14:45:07.793781 | orchestrator | Monday 19 May 2025 14:35:24 +0000 (0:00:00.847) 0:01:06.985 ************
2025-05-19 14:45:07.793789 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.793797 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.793805 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.793813 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.793820 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.793828 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.793836 | orchestrator |
2025-05-19 14:45:07.793844 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-05-19 14:45:07.793852 | orchestrator | Monday 19 May 2025 14:35:26 +0000 (0:00:01.441) 0:01:08.426 ************
2025-05-19 14:45:07.793860 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.793867 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.793875 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.793883 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.793891 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.793898 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.793906 | orchestrator |
2025-05-19 14:45:07.793914 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-05-19 14:45:07.793922 | orchestrator | Monday 19 May 2025 14:35:26 +0000 (0:00:00.780) 0:01:09.207 ************
2025-05-19 14:45:07.793929 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.793937 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.793945 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.793953 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.793960 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.793968 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.793975 | orchestrator |
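The handler_*_status facts fold the container probes into booleans, scoped to the group that runs each daemon: the mon and mgr facts resolve on testbed-node-0/1/2 and the osd/mds/rgw facts on testbed-node-3/4/5, matching the ok/skipping split above, while crash and exporter run everywhere. A sketch of one such fact, with variable names assumed:

```yaml
# Illustrative: condense a container probe into a boolean handler fact.
- name: Set_fact handler_mon_status
  ansible.builtin.set_fact:
    handler_mon_status: "{{ ceph_mon_container_stat.stdout_lines | default([]) | length > 0 }}"
  when: inventory_hostname in groups.get('mons', [])
```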
| orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.794469 | orchestrator | 2025-05-19 14:45:07.794477 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-05-19 14:45:07.794485 | orchestrator | Monday 19 May 2025 14:35:33 +0000 (0:00:00.730) 0:01:16.108 ************ 2025-05-19 14:45:07.794492 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.794500 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.794508 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.794515 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.794523 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.794530 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.794538 | orchestrator | 2025-05-19 14:45:07.794546 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-05-19 14:45:07.794554 | orchestrator | Monday 19 May 2025 14:35:34 +0000 (0:00:00.630) 0:01:16.738 ************ 2025-05-19 14:45:07.794561 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 14:45:07.794569 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 14:45:07.794577 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 14:45:07.794585 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 14:45:07.794592 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 14:45:07.794600 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 14:45:07.794608 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 14:45:07.794616 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 14:45:07.794628 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 14:45:07.794636 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 14:45:07.794644 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 14:45:07.794652 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 14:45:07.794660 | orchestrator | 2025-05-19 14:45:07.794702 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-05-19 14:45:07.794711 | orchestrator | Monday 19 May 2025 14:35:35 +0000 (0:00:01.557) 0:01:18.296 ************ 2025-05-19 14:45:07.794719 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:45:07.794727 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:45:07.794734 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:45:07.794742 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.794750 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.794758 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.794766 | orchestrator | 2025-05-19 14:45:07.794773 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-05-19 14:45:07.794783 | orchestrator | Monday 19 May 2025 14:35:36 +0000 (0:00:00.846) 0:01:19.142 ************ 2025-05-19 14:45:07.794792 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 14:45:07.794800 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.794809 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.794818 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.794827 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.794835 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.794844 | orchestrator | 2025-05-19 14:45:07.794852 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-05-19 14:45:07.794867 | orchestrator | Monday 19 May 2025 14:35:37 +0000 (0:00:00.770) 0:01:19.913 ************ 2025-05-19 14:45:07.794876 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.794885 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.794894 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.794902 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.794911 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.794920 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.794929 | orchestrator | 2025-05-19 14:45:07.794937 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-05-19 14:45:07.794946 | orchestrator | Monday 19 May 2025 14:35:38 +0000 (0:00:00.606) 0:01:20.519 ************ 2025-05-19 14:45:07.794955 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.794963 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.794972 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.794981 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.794990 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.794998 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.795007 | orchestrator | 2025-05-19 14:45:07.795015 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-05-19 14:45:07.795025 | orchestrator | Monday 19 May 2025 14:35:38 +0000 (0:00:00.796) 0:01:21.316 ************ 2025-05-19 14:45:07.795034 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.795043 | orchestrator | 2025-05-19 14:45:07.795052 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-05-19 14:45:07.795061 | orchestrator | Monday 19 May 2025 14:35:40 +0000 (0:00:01.307) 0:01:22.624 ************ 2025-05-19 14:45:07.795070 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.795079 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.795088 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.795097 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.795106 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.795114 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.795123 | orchestrator | 2025-05-19 14:45:07.795132 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-05-19 14:45:07.795141 | orchestrator | Monday 19 May 2025 14:36:55 +0000 (0:01:15.042) 0:02:37.666 ************ 2025-05-19 14:45:07.795151 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 14:45:07.795158 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 14:45:07.795166 | orchestrator | skipping: 
[testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 14:45:07.795174 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.795182 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 14:45:07.795189 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 14:45:07.795197 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 14:45:07.795205 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.795213 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 14:45:07.795221 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 14:45:07.795229 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 14:45:07.795236 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.795244 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 14:45:07.795252 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 14:45:07.795260 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 14:45:07.795273 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.795281 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 14:45:07.795289 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 14:45:07.795301 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 14:45:07.795324 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.795332 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 14:45:07.795340 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 14:45:07.795348 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 14:45:07.795381 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.795390 | orchestrator | 2025-05-19 14:45:07.795398 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-05-19 14:45:07.795406 | orchestrator | Monday 19 May 2025 14:36:56 +0000 (0:00:01.368) 0:02:39.034 ************ 2025-05-19 14:45:07.795414 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.795422 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.795429 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.795437 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.795445 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.795453 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.795460 | orchestrator | 2025-05-19 14:45:07.795468 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-05-19 14:45:07.795476 | orchestrator | Monday 19 May 2025 14:36:57 +0000 (0:00:00.733) 0:02:39.768 ************ 2025-05-19 14:45:07.795484 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.795492 | orchestrator | 2025-05-19 14:45:07.795499 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-05-19 14:45:07.795507 | orchestrator | Monday 19 May 2025 14:36:57 +0000 (0:00:00.181) 
0:02:39.949 ************ 2025-05-19 14:45:07.795515 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.795523 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.795530 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.795538 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.795546 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.795554 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.795561 | orchestrator | 2025-05-19 14:45:07.795569 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-05-19 14:45:07.795577 | orchestrator | Monday 19 May 2025 14:36:58 +0000 (0:00:00.982) 0:02:40.932 ************ 2025-05-19 14:45:07.795585 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.795593 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.795600 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.795608 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.795616 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.795624 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.795631 | orchestrator | 2025-05-19 14:45:07.795639 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-05-19 14:45:07.795647 | orchestrator | Monday 19 May 2025 14:36:59 +0000 (0:00:00.745) 0:02:41.678 ************ 2025-05-19 14:45:07.795655 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.795662 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.795670 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.795678 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.795686 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.795693 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.795701 | orchestrator | 2025-05-19 14:45:07.795709 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-05-19 14:45:07.795717 | orchestrator | Monday 19 May 2025 14:37:00 +0000 (0:00:00.738) 0:02:42.417 ************ 2025-05-19 14:45:07.795725 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.795738 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.795746 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.795754 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.795761 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.795769 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.795777 | orchestrator | 2025-05-19 14:45:07.795784 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-05-19 14:45:07.795792 | orchestrator | Monday 19 May 2025 14:37:02 +0000 (0:00:02.221) 0:02:44.638 ************ 2025-05-19 14:45:07.795800 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.795808 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.795815 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.795823 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.795831 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.795838 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.795846 | orchestrator | 2025-05-19 14:45:07.795854 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-05-19 14:45:07.795861 | orchestrator | Monday 19 May 2025 14:37:03 +0000 (0:00:00.799) 0:02:45.438 ************ 2025-05-19 14:45:07.795870 | 
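
[The "Get ceph version" task above runs the ceph binary from the image pulled a moment earlier (the 75-second "Pulling Ceph container image" task), and the following Set_fact splits its stdout. `ceph --version` prints a line like `ceph version 18.2.2 (<commit>) reef (stable)`, so the third whitespace-separated field is the version number. A sketch, where container_binary and the image variables are assumptions:]

    # Sketch: probe the Ceph version shipped in the container image.
    - name: Get ceph version (sketch)
      ansible.builtin.command: >-
        {{ container_binary }} run --rm --entrypoint /usr/bin/ceph
        {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
        --version
      register: ceph_version_out
      changed_when: false

    # "ceph version 18.2.2 (...) reef (stable)" -> take field [2], i.e. "18.2.2".
    - name: Set_fact ceph_version (sketch)
      ansible.builtin.set_fact:
        ceph_version: "{{ ceph_version_out.stdout.split(' ')[2] }}"
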
orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.795879 | orchestrator | 2025-05-19 14:45:07.795887 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-05-19 14:45:07.795895 | orchestrator | Monday 19 May 2025 14:37:04 +0000 (0:00:00.984) 0:02:46.422 ************ 2025-05-19 14:45:07.795902 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.795910 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.795918 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.795925 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.795933 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.795941 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.795948 | orchestrator | 2025-05-19 14:45:07.795956 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-05-19 14:45:07.795964 | orchestrator | Monday 19 May 2025 14:37:04 +0000 (0:00:00.624) 0:02:47.047 ************ 2025-05-19 14:45:07.795972 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.795979 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.795987 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.795994 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.796002 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.796010 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.796017 | orchestrator | 2025-05-19 14:45:07.796025 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-05-19 14:45:07.796037 | orchestrator | Monday 19 May 2025 14:37:05 +0000 (0:00:00.641) 0:02:47.689 ************ 2025-05-19 14:45:07.796045 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.796053 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.796061 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.796068 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.796076 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.796083 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.796091 | orchestrator | 2025-05-19 14:45:07.796099 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-05-19 14:45:07.796130 | orchestrator | Monday 19 May 2025 14:37:05 +0000 (0:00:00.474) 0:02:48.164 ************ 2025-05-19 14:45:07.796139 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.796147 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.796155 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.796162 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.796170 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.796178 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.796186 | orchestrator | 2025-05-19 14:45:07.796193 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-05-19 14:45:07.796207 | orchestrator | Monday 19 May 2025 14:37:06 +0000 (0:00:00.689) 0:02:48.853 ************ 2025-05-19 14:45:07.796214 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.796222 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.796230 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.796238 | 
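
[The ladder of Set_fact ceph_release tasks running here (jewel, kraken, luminous, mimic, ...) each guard on the major version parsed from ceph_version; with an 18.x image only the reef rung fires, which is why everything else reports skipping and "Set_fact ceph_release reef" below reports ok. A sketch of two rungs, assuming the version fact set above:]

    # Sketch: one rung per release; only the matching major version fires.
    - name: Set_fact ceph_release quincy (sketch)
      ansible.builtin.set_fact:
        ceph_release: quincy
      when: ceph_version.split('.')[0] | int == 17   # Quincy is Ceph 17.x

    - name: Set_fact ceph_release reef (sketch)
      ansible.builtin.set_fact:
        ceph_release: reef
      when: ceph_version.split('.')[0] | int == 18   # Reef is Ceph 18.x
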
orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.796246 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.796253 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.796261 | orchestrator | 2025-05-19 14:45:07.796269 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-05-19 14:45:07.796277 | orchestrator | Monday 19 May 2025 14:37:07 +0000 (0:00:00.678) 0:02:49.532 ************ 2025-05-19 14:45:07.796284 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.796292 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.796300 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.796307 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.796328 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.796336 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.796344 | orchestrator | 2025-05-19 14:45:07.796351 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-05-19 14:45:07.796359 | orchestrator | Monday 19 May 2025 14:37:07 +0000 (0:00:00.663) 0:02:50.195 ************ 2025-05-19 14:45:07.796367 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.796375 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.796382 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.796390 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.796398 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.796405 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.796413 | orchestrator | 2025-05-19 14:45:07.796421 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-05-19 14:45:07.796429 | orchestrator | Monday 19 May 2025 14:37:08 +0000 (0:00:00.502) 0:02:50.698 ************ 2025-05-19 14:45:07.796437 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.796444 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.796452 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.796460 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.796467 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.796475 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.796483 | orchestrator | 2025-05-19 14:45:07.796491 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-05-19 14:45:07.796499 | orchestrator | Monday 19 May 2025 14:37:09 +0000 (0:00:00.669) 0:02:51.368 ************ 2025-05-19 14:45:07.796507 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.796516 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.796528 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.796541 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.796549 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.796557 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.796564 | orchestrator | 2025-05-19 14:45:07.796572 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-05-19 14:45:07.796580 | orchestrator | Monday 19 May 2025 14:37:10 +0000 (0:00:00.984) 0:02:52.353 ************ 2025-05-19 14:45:07.796588 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.796596 | orchestrator | 2025-05-19 
14:45:07.796604 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-05-19 14:45:07.796612 | orchestrator | Monday 19 May 2025 14:37:10 +0000 (0:00:00.970) 0:02:53.323 ************ 2025-05-19 14:45:07.796619 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-19 14:45:07.796627 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-19 14:45:07.796635 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-19 14:45:07.796648 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-19 14:45:07.796656 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-19 14:45:07.796664 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-19 14:45:07.796672 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-19 14:45:07.796679 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-19 14:45:07.796687 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-19 14:45:07.796695 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-19 14:45:07.796703 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-19 14:45:07.796710 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-19 14:45:07.796718 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-19 14:45:07.796726 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-19 14:45:07.796734 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-19 14:45:07.796742 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-19 14:45:07.796749 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-19 14:45:07.796757 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-19 14:45:07.796765 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-19 14:45:07.796773 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-19 14:45:07.796781 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-19 14:45:07.796812 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-19 14:45:07.796821 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-19 14:45:07.796829 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-19 14:45:07.796837 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-19 14:45:07.796845 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-19 14:45:07.796852 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-19 14:45:07.796860 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-19 14:45:07.796868 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-19 14:45:07.796875 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-19 14:45:07.796883 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-19 14:45:07.796891 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-19 14:45:07.796898 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-19 14:45:07.796906 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-19 14:45:07.796914 | orchestrator | changed: 
[testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-19 14:45:07.796922 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-19 14:45:07.796929 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-05-19 14:45:07.796937 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-05-19 14:45:07.796945 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-05-19 14:45:07.796953 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-05-19 14:45:07.796960 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-05-19 14:45:07.796968 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-05-19 14:45:07.796976 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-19 14:45:07.796983 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-19 14:45:07.797009 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-19 14:45:07.797017 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-19 14:45:07.797025 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-19 14:45:07.797038 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-19 14:45:07.797046 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 14:45:07.797054 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 14:45:07.797061 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 14:45:07.797069 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 14:45:07.797077 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 14:45:07.797084 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 14:45:07.797092 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 14:45:07.797100 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 14:45:07.797107 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 14:45:07.797115 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 14:45:07.797123 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 14:45:07.797131 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 14:45:07.797138 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 14:45:07.797146 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 14:45:07.797154 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 14:45:07.797161 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 14:45:07.797217 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 14:45:07.797226 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 14:45:07.797234 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 14:45:07.797242 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 14:45:07.797250 | 
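
[The "Create ceph initial directories" loop running here (it continues below with the remaining bootstrap-* and runtime paths) lays out the standard /etc/ceph and /var/lib/ceph tree on every node before any container starts. A minimal sketch over the paths visible in the log; ceph_uid is an assumption (167 is the ceph user in the upstream container images):]

    # Sketch: pre-create the directory tree the containerized daemons bind-mount.
    - name: Create ceph initial directories (sketch)
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: "{{ ceph_uid | default('167') }}"
        group: "{{ ceph_uid | default('167') }}"
        mode: "0755"
      loop:
        - /etc/ceph
        - /var/lib/ceph/mon
        - /var/lib/ceph/osd
        - /var/lib/ceph/mds
        - /var/lib/ceph/tmp
        - /var/lib/ceph/crash
        - /var/lib/ceph/radosgw
        - /var/lib/ceph/bootstrap-osd
        - /var/run/ceph
        - /var/log/ceph
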
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 14:45:07.797258 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 14:45:07.797266 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 14:45:07.797273 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 14:45:07.797281 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 14:45:07.797289 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 14:45:07.797296 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 14:45:07.797364 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 14:45:07.797373 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 14:45:07.797381 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 14:45:07.797389 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 14:45:07.797424 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 14:45:07.797433 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 14:45:07.797441 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 14:45:07.797448 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 14:45:07.797456 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 14:45:07.797464 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-19 14:45:07.797472 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-19 14:45:07.797491 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-19 14:45:07.797499 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-19 14:45:07.797507 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-19 14:45:07.797515 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-19 14:45:07.797523 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-19 14:45:07.797531 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-19 14:45:07.797538 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-19 14:45:07.797546 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-19 14:45:07.797554 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-19 14:45:07.797562 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-19 14:45:07.797569 | orchestrator | 2025-05-19 14:45:07.797577 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-19 14:45:07.797585 | orchestrator | Monday 19 May 2025 14:37:17 +0000 (0:00:06.277) 0:02:59.601 ************ 2025-05-19 14:45:07.797593 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.797601 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.797609 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.797617 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 
14:45:07.797625 | orchestrator | 2025-05-19 14:45:07.797633 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-05-19 14:45:07.797641 | orchestrator | Monday 19 May 2025 14:37:17 +0000 (0:00:00.719) 0:03:00.320 ************ 2025-05-19 14:45:07.797649 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.797658 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.797666 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.797674 | orchestrator | 2025-05-19 14:45:07.797682 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-05-19 14:45:07.797689 | orchestrator | Monday 19 May 2025 14:37:18 +0000 (0:00:00.658) 0:03:00.979 ************ 2025-05-19 14:45:07.797697 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.797705 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.797713 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.797721 | orchestrator | 2025-05-19 14:45:07.797729 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-05-19 14:45:07.797736 | orchestrator | Monday 19 May 2025 14:37:19 +0000 (0:00:01.233) 0:03:02.212 ************ 2025-05-19 14:45:07.797744 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.797752 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.797760 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.797768 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.797775 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.797783 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.797791 | orchestrator | 2025-05-19 14:45:07.797799 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-05-19 14:45:07.797806 | orchestrator | Monday 19 May 2025 14:37:20 +0000 (0:00:00.562) 0:03:02.774 ************ 2025-05-19 14:45:07.797814 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.797822 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.797830 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.797843 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.797851 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.797858 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.797866 | orchestrator | 2025-05-19 14:45:07.797874 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-05-19 14:45:07.797882 | orchestrator | Monday 19 May 2025 14:37:21 +0000 (0:00:00.746) 0:03:03.520 ************ 2025-05-19 14:45:07.797889 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.797897 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.797905 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.797913 | orchestrator | skipping: 
[testbed-node-3] 2025-05-19 14:45:07.797924 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.797932 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.797940 | orchestrator | 2025-05-19 14:45:07.797948 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-05-19 14:45:07.797956 | orchestrator | Monday 19 May 2025 14:37:21 +0000 (0:00:00.469) 0:03:03.990 ************ 2025-05-19 14:45:07.797964 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.797972 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798001 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.798010 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.798041 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.798051 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.798058 | orchestrator | 2025-05-19 14:45:07.798066 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-05-19 14:45:07.798074 | orchestrator | Monday 19 May 2025 14:37:22 +0000 (0:00:00.575) 0:03:04.566 ************ 2025-05-19 14:45:07.798082 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.798089 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798097 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.798105 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.798112 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.798120 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.798128 | orchestrator | 2025-05-19 14:45:07.798136 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-19 14:45:07.798144 | orchestrator | Monday 19 May 2025 14:37:22 +0000 (0:00:00.485) 0:03:05.052 ************ 2025-05-19 14:45:07.798151 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.798159 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798167 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.798174 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.798182 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.798189 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.798197 | orchestrator | 2025-05-19 14:45:07.798205 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-19 14:45:07.798213 | orchestrator | Monday 19 May 2025 14:37:23 +0000 (0:00:00.657) 0:03:05.710 ************ 2025-05-19 14:45:07.798254 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.798263 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798271 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.798278 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.798286 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.798294 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.798301 | orchestrator | 2025-05-19 14:45:07.798326 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-19 14:45:07.798335 | orchestrator | Monday 19 May 2025 14:37:23 +0000 (0:00:00.529) 0:03:06.239 ************ 2025-05-19 14:45:07.798342 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.798350 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798358 | orchestrator 
| skipping: [testbed-node-2] 2025-05-19 14:45:07.798365 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.798383 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.798390 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.798398 | orchestrator | 2025-05-19 14:45:07.798406 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-19 14:45:07.798414 | orchestrator | Monday 19 May 2025 14:37:24 +0000 (0:00:00.568) 0:03:06.808 ************ 2025-05-19 14:45:07.798421 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.798429 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798437 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.798445 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.798452 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.798460 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.798468 | orchestrator | 2025-05-19 14:45:07.798476 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-05-19 14:45:07.798484 | orchestrator | Monday 19 May 2025 14:37:28 +0000 (0:00:04.288) 0:03:11.096 ************ 2025-05-19 14:45:07.798491 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.798499 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798507 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.798514 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.798522 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.798530 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.798538 | orchestrator | 2025-05-19 14:45:07.798546 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-05-19 14:45:07.798553 | orchestrator | Monday 19 May 2025 14:37:29 +0000 (0:00:00.979) 0:03:12.076 ************ 2025-05-19 14:45:07.798561 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.798569 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798577 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.798584 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.798592 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.798600 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.798607 | orchestrator | 2025-05-19 14:45:07.798615 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-05-19 14:45:07.798623 | orchestrator | Monday 19 May 2025 14:37:30 +0000 (0:00:00.726) 0:03:12.802 ************ 2025-05-19 14:45:07.798631 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.798638 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798646 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.798654 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.798662 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.798669 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.798677 | orchestrator | 2025-05-19 14:45:07.798685 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-05-19 14:45:07.798692 | orchestrator | Monday 19 May 2025 14:37:31 +0000 (0:00:00.882) 0:03:13.685 ************ 2025-05-19 14:45:07.798700 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.798708 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798715 | orchestrator | skipping: [testbed-node-2] 
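
[Since the 'ceph-volume lvm batch --report' branch was skipped on every host, the play counts OSDs on the OSD nodes (testbed-node-3..5) by parsing 'ceph-volume lvm list' instead — the 4.3-second ok above — and then derives _osd_memory_target from it; dividing available host memory across num_osds is the usual approach, stated here as an assumption. A sketch with illustrative variable names, container flags abbreviated:]

    # Sketch: count already-created OSDs from ceph-volume's JSON inventory.
    - name: Run ceph-volume lvm list (sketch)
      ansible.builtin.command: >-
        {{ container_binary }} run --rm --privileged
        -v /dev:/dev -v /var/lib/ceph:/var/lib/ceph
        --entrypoint ceph-volume
        {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
        lvm list --format=json
      register: lvm_list
      changed_when: false

    # The JSON is keyed by OSD id, so its length is the OSD count on this host.
    - name: Set_fact num_osds (add existing osds) (sketch)
      ansible.builtin.set_fact:
        num_osds: "{{ (lvm_list.stdout | from_json | length) + (num_osds | default(0) | int) }}"
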
2025-05-19 14:45:07.798723 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.798736 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.798744 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.798752 | orchestrator | 2025-05-19 14:45:07.798760 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-05-19 14:45:07.798795 | orchestrator | Monday 19 May 2025 14:37:32 +0000 (0:00:00.674) 0:03:14.360 ************ 2025-05-19 14:45:07.798805 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.798813 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798820 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.798835 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-05-19 14:45:07.798844 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-05-19 14:45:07.798854 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.798862 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-05-19 14:45:07.798870 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-05-19 14:45:07.798878 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.798886 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-05-19 14:45:07.798894 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-05-19 14:45:07.798902 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.798909 | orchestrator | 2025-05-19 14:45:07.798917 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-05-19 14:45:07.798925 | orchestrator | Monday 19 May 2025 14:37:32 
+0000 (0:00:00.962) 0:03:15.322 ************ 2025-05-19 14:45:07.798933 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.798941 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.798948 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.798956 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.798964 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.798971 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.798979 | orchestrator | 2025-05-19 14:45:07.798987 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-05-19 14:45:07.798995 | orchestrator | Monday 19 May 2025 14:37:33 +0000 (0:00:00.819) 0:03:16.141 ************ 2025-05-19 14:45:07.799002 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.799010 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.799018 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.799025 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.799033 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.799041 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.799048 | orchestrator | 2025-05-19 14:45:07.799057 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 14:45:07.799064 | orchestrator | Monday 19 May 2025 14:37:34 +0000 (0:00:00.639) 0:03:16.780 ************ 2025-05-19 14:45:07.799072 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.799080 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.799087 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.799095 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.799108 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.799116 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.799123 | orchestrator | 2025-05-19 14:45:07.799131 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 14:45:07.799139 | orchestrator | Monday 19 May 2025 14:37:34 +0000 (0:00:00.449) 0:03:17.230 ************ 2025-05-19 14:45:07.799147 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.799154 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.799162 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.799170 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.799177 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.799189 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.799197 | orchestrator | 2025-05-19 14:45:07.799204 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-19 14:45:07.799212 | orchestrator | Monday 19 May 2025 14:37:35 +0000 (0:00:00.691) 0:03:17.922 ************ 2025-05-19 14:45:07.799220 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.799228 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.799236 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.799265 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.799275 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.799283 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.799290 | orchestrator | 2025-05-19 14:45:07.799298 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 
14:45:07.799306 | orchestrator | Monday 19 May 2025 14:37:36 +0000 (0:00:00.546) 0:03:18.469 ************ 2025-05-19 14:45:07.799331 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.799339 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.799347 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.799355 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.799363 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.799371 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.799378 | orchestrator | 2025-05-19 14:45:07.799386 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-19 14:45:07.799394 | orchestrator | Monday 19 May 2025 14:37:37 +0000 (0:00:00.983) 0:03:19.453 ************ 2025-05-19 14:45:07.799402 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 14:45:07.799410 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 14:45:07.799418 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 14:45:07.799426 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.799433 | orchestrator | 2025-05-19 14:45:07.799441 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 14:45:07.799449 | orchestrator | Monday 19 May 2025 14:37:37 +0000 (0:00:00.392) 0:03:19.846 ************ 2025-05-19 14:45:07.799456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 14:45:07.799464 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 14:45:07.799472 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 14:45:07.799480 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.799487 | orchestrator | 2025-05-19 14:45:07.799495 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 14:45:07.799503 | orchestrator | Monday 19 May 2025 14:37:37 +0000 (0:00:00.357) 0:03:20.203 ************ 2025-05-19 14:45:07.799511 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 14:45:07.799518 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 14:45:07.799526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 14:45:07.799534 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.799541 | orchestrator | 2025-05-19 14:45:07.799549 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-19 14:45:07.799557 | orchestrator | Monday 19 May 2025 14:37:38 +0000 (0:00:00.385) 0:03:20.591 ************ 2025-05-19 14:45:07.799570 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.799578 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.799586 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.799593 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.799601 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.799609 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.799617 | orchestrator | 2025-05-19 14:45:07.799624 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-19 14:45:07.799632 | orchestrator | Monday 19 May 2025 14:37:38 +0000 (0:00:00.578) 0:03:21.169 ************ 2025-05-19 14:45:07.799640 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 14:45:07.799648 | 
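
[The "Set_fact rgw_instances" task (its per-host results continue just below) builds one dict per RADOS gateway instance; with one instance per host it yields entries like {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}, the same items the earlier instance-directory and environment-file tasks looped over. A sketch, with _radosgw_address, radosgw_frontend_port, and radosgw_num_instances as assumed inputs:]

    # Sketch: build the rgw_instances list, bumping the port per extra instance.
    - name: Set_fact rgw_instances (sketch)
      ansible.builtin.set_fact:
        rgw_instances: >-
          {{ rgw_instances | default([]) +
             [{'instance_name': 'rgw' ~ item,
               'radosgw_address': _radosgw_address,
               'radosgw_frontend_port': radosgw_frontend_port | int + item | int}] }}
      loop: "{{ range(0, radosgw_num_instances | default(1) | int) | list }}"
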
orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.799656 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 14:45:07.799664 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.799671 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 14:45:07.799679 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.799687 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-19 14:45:07.799694 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-19 14:45:07.799702 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-19 14:45:07.799710 | orchestrator | 2025-05-19 14:45:07.799718 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-05-19 14:45:07.799725 | orchestrator | Monday 19 May 2025 14:37:40 +0000 (0:00:01.640) 0:03:22.810 ************ 2025-05-19 14:45:07.799733 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:45:07.799741 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:45:07.799749 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.799756 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:45:07.799764 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.799771 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.799779 | orchestrator | 2025-05-19 14:45:07.799787 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-19 14:45:07.799795 | orchestrator | Monday 19 May 2025 14:37:42 +0000 (0:00:02.347) 0:03:25.157 ************ 2025-05-19 14:45:07.799803 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:45:07.799810 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:45:07.799818 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:45:07.799826 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.799833 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.799841 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.799849 | orchestrator | 2025-05-19 14:45:07.799857 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-05-19 14:45:07.799864 | orchestrator | Monday 19 May 2025 14:37:43 +0000 (0:00:00.973) 0:03:26.130 ************ 2025-05-19 14:45:07.799872 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.799880 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.799888 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.799896 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:45:07.799904 | orchestrator | 2025-05-19 14:45:07.799916 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-05-19 14:45:07.799924 | orchestrator | Monday 19 May 2025 14:37:44 +0000 (0:00:00.844) 0:03:26.975 ************ 2025-05-19 14:45:07.799932 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.799940 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.799947 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.799955 | orchestrator | 2025-05-19 14:45:07.799963 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-05-19 14:45:07.799994 | orchestrator | Monday 19 May 2025 14:37:44 +0000 (0:00:00.339) 0:03:27.314 ************ 2025-05-19 14:45:07.800004 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:45:07.800012 | orchestrator | changed: 
[testbed-node-1]
2025-05-19 14:45:07.800019 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.800033 | orchestrator |
2025-05-19 14:45:07.800041 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-05-19 14:45:07.800049 | orchestrator | Monday 19 May 2025 14:37:46 +0000 (0:00:01.618) 0:03:28.932 ************
2025-05-19 14:45:07.800057 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 14:45:07.800065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-19 14:45:07.800073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-19 14:45:07.800081 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.800088 | orchestrator |
2025-05-19 14:45:07.800096 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-05-19 14:45:07.800104 | orchestrator | Monday 19 May 2025 14:37:47 +0000 (0:00:00.630) 0:03:29.562 ************
2025-05-19 14:45:07.800112 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.800119 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.800127 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.800135 | orchestrator |
2025-05-19 14:45:07.800143 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-05-19 14:45:07.800150 | orchestrator | Monday 19 May 2025 14:37:47 +0000 (0:00:00.310) 0:03:29.872 ************
2025-05-19 14:45:07.800158 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.800166 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.800174 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.800181 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:45:07.800189 | orchestrator |
2025-05-19 14:45:07.800197 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-05-19 14:45:07.800205 | orchestrator | Monday 19 May 2025 14:37:48 +0000 (0:00:00.945) 0:03:30.817 ************
2025-05-19 14:45:07.800213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 14:45:07.800220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 14:45:07.800228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 14:45:07.800236 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800244 | orchestrator |
2025-05-19 14:45:07.800252 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-05-19 14:45:07.800259 | orchestrator | Monday 19 May 2025 14:37:48 +0000 (0:00:00.409) 0:03:31.227 ************
2025-05-19 14:45:07.800267 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800275 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.800283 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.800290 | orchestrator |
2025-05-19 14:45:07.800298 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-05-19 14:45:07.800306 | orchestrator | Monday 19 May 2025 14:37:49 +0000 (0:00:00.296) 0:03:31.524 ************
2025-05-19 14:45:07.800329 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800337 | orchestrator |
2025-05-19 14:45:07.800345 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-05-19 14:45:07.800353 | orchestrator | Monday 19 May 2025 14:37:49 +0000 (0:00:00.203) 0:03:31.728 ************
2025-05-19 14:45:07.800360 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800368 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.800376 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.800384 | orchestrator |
2025-05-19 14:45:07.800392 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-05-19 14:45:07.800399 | orchestrator | Monday 19 May 2025 14:37:49 +0000 (0:00:00.304) 0:03:32.032 ************
2025-05-19 14:45:07.800407 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800415 | orchestrator |
2025-05-19 14:45:07.800423 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-05-19 14:45:07.800430 | orchestrator | Monday 19 May 2025 14:37:49 +0000 (0:00:00.206) 0:03:32.239 ************
2025-05-19 14:45:07.800438 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800451 | orchestrator |
2025-05-19 14:45:07.800459 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-05-19 14:45:07.800466 | orchestrator | Monday 19 May 2025 14:37:50 +0000 (0:00:00.215) 0:03:32.454 ************
2025-05-19 14:45:07.800474 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800482 | orchestrator |
2025-05-19 14:45:07.800489 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-05-19 14:45:07.800497 | orchestrator | Monday 19 May 2025 14:37:50 +0000 (0:00:00.335) 0:03:32.789 ************
2025-05-19 14:45:07.800505 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800512 | orchestrator |
2025-05-19 14:45:07.800520 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-05-19 14:45:07.800528 | orchestrator | Monday 19 May 2025 14:37:50 +0000 (0:00:00.233) 0:03:33.023 ************
2025-05-19 14:45:07.800535 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800543 | orchestrator |
2025-05-19 14:45:07.800551 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-05-19 14:45:07.800558 | orchestrator | Monday 19 May 2025 14:37:50 +0000 (0:00:00.204) 0:03:33.228 ************
2025-05-19 14:45:07.800566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 14:45:07.800574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 14:45:07.800582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 14:45:07.800594 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800602 | orchestrator |
2025-05-19 14:45:07.800610 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-05-19 14:45:07.800618 | orchestrator | Monday 19 May 2025 14:37:51 +0000 (0:00:00.430) 0:03:33.658 ************
2025-05-19 14:45:07.800626 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800661 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.800669 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.800677 | orchestrator |
2025-05-19 14:45:07.800710 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-05-19 14:45:07.800720 | orchestrator | Monday 19 May 2025 14:37:51 +0000 (0:00:00.422) 0:03:34.080 ************
2025-05-19 14:45:07.800728 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800736 | orchestrator |
2025-05-19 14:45:07.800743 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-05-19 14:45:07.800751 | orchestrator | Monday 19 May 2025 14:37:51 +0000 (0:00:00.232) 0:03:34.313 ************
2025-05-19 14:45:07.800759 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800767 | orchestrator |
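The OSD handler block above shows ceph-ansible's restart pattern: a per-node restart script is copied into place, and the serialized restarts are bracketed by cluster-wide safety toggles (balancer off, pg autoscaling off, restart, then both re-enabled). Everything is skipped here because no OSD configuration changed. A minimal Ansible sketch of that bracketing, assuming a containerized cluster where container_exec_cmd wraps the ceph CLI; the group names and script path are illustrative, not the role's exact code:

    - name: Disable balancer before OSD restarts  # sketch, not the role's exact task
      ansible.builtin.command: "{{ container_exec_cmd }} ceph balancer off"
      run_once: true
      delegate_to: "{{ groups['mons'][0] }}"

    - name: Disable pg autoscale on pools
      ansible.builtin.command: "{{ container_exec_cmd }} ceph osd pool set {{ item }} pg_autoscale_mode off"
      loop: "{{ pool_list | default([]) }}"  # pool_list: assumed result of 'Get pool list'
      run_once: true
      delegate_to: "{{ groups['mons'][0] }}"

    - name: Restart ceph osds daemon(s) one node at a time
      ansible.builtin.command: bash /tmp/restart_osd_daemon.sh  # script path assumed
      loop: "{{ groups['osds'] }}"
      delegate_to: "{{ item }}"
      run_once: true

    - name: Re-enable balancer
      ansible.builtin.command: "{{ container_exec_cmd }} ceph balancer on"
      run_once: true
      delegate_to: "{{ groups['mons'][0] }}"

Re-enabling pg autoscaling mirrors the disable step with pg_autoscale_mode on.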
2025-05-19 14:45:07.800775 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-05-19 14:45:07.800782 | orchestrator | Monday 19 May 2025 14:37:52 +0000 (0:00:00.224) 0:03:34.538 ************
2025-05-19 14:45:07.800790 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.800798 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.800806 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.800814 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:45:07.800822 | orchestrator |
2025-05-19 14:45:07.800829 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-05-19 14:45:07.800837 | orchestrator | Monday 19 May 2025 14:37:53 +0000 (0:00:00.968) 0:03:35.506 ************
2025-05-19 14:45:07.800845 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.800853 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.800860 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.800868 | orchestrator |
2025-05-19 14:45:07.800876 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-05-19 14:45:07.800884 | orchestrator | Monday 19 May 2025 14:37:53 +0000 (0:00:00.263) 0:03:35.770 ************
2025-05-19 14:45:07.800891 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:45:07.800899 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:45:07.800907 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:45:07.800920 | orchestrator |
2025-05-19 14:45:07.800928 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-05-19 14:45:07.800936 | orchestrator | Monday 19 May 2025 14:37:54 +0000 (0:00:01.086) 0:03:36.856 ************
2025-05-19 14:45:07.800944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 14:45:07.800951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 14:45:07.800959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 14:45:07.800967 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.800975 | orchestrator |
2025-05-19 14:45:07.800983 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-05-19 14:45:07.800990 | orchestrator | Monday 19 May 2025 14:37:55 +0000 (0:00:00.823) 0:03:37.679 ************
2025-05-19 14:45:07.800998 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.801006 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.801014 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.801021 | orchestrator |
2025-05-19 14:45:07.801029 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-05-19 14:45:07.801037 | orchestrator | Monday 19 May 2025 14:37:55 +0000 (0:00:00.266) 0:03:37.946 ************
2025-05-19 14:45:07.801045 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.801053 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.801061 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.801068 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:45:07.801076 | orchestrator |
2025-05-19 14:45:07.801084 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-05-19 14:45:07.801092 | orchestrator | Monday 19 May 2025 14:37:56 +0000 (0:00:00.816) 0:03:38.763 ************
2025-05-19 14:45:07.801100 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.801108 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.801115 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.801123 | orchestrator |
2025-05-19 14:45:07.801131 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-05-19 14:45:07.801139 | orchestrator | Monday 19 May 2025 14:37:56 +0000 (0:00:00.262) 0:03:39.025 ************
2025-05-19 14:45:07.801147 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:45:07.801154 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:45:07.801162 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:45:07.801170 | orchestrator |
2025-05-19 14:45:07.801178 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-05-19 14:45:07.801185 | orchestrator | Monday 19 May 2025 14:37:57 +0000 (0:00:01.076) 0:03:40.102 ************
2025-05-19 14:45:07.801193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 14:45:07.801201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 14:45:07.801209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 14:45:07.801217 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.801225 | orchestrator |
2025-05-19 14:45:07.801232 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-05-19 14:45:07.801240 | orchestrator | Monday 19 May 2025 14:37:58 +0000 (0:00:00.617) 0:03:40.719 ************
2025-05-19 14:45:07.801248 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:45:07.801256 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.801263 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.801271 | orchestrator |
2025-05-19 14:45:07.801279 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-05-19 14:45:07.801286 | orchestrator | Monday 19 May 2025 14:37:58 +0000 (0:00:00.250) 0:03:40.970 ************
2025-05-19 14:45:07.801294 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.801302 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.801356 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.801366 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.801379 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.801387 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.801395 | orchestrator |
2025-05-19 14:45:07.801403 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-05-19 14:45:07.801411 | orchestrator | Monday 19 May 2025 14:37:59 +0000 (0:00:00.697) 0:03:41.667 ************
2025-05-19 14:45:07.801442 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.801452 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.801460 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.801467 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:45:07.801475 | orchestrator |
2025-05-19 14:45:07.801483 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-05-19 14:45:07.801491 | orchestrator | Monday 19 May 2025 14:38:00 +0000 (0:00:00.810) 0:03:42.477 ************
2025-05-19 14:45:07.801499 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.801507 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.801514 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.801522 | orchestrator |
2025-05-19 14:45:07.801530 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-05-19 14:45:07.801538 | orchestrator | Monday 19 May 2025 14:38:00 +0000 (0:00:00.281) 0:03:42.758 ************
2025-05-19 14:45:07.801545 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.801553 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:45:07.801561 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.801568 | orchestrator |
2025-05-19 14:45:07.801576 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-05-19 14:45:07.801584 | orchestrator | Monday 19 May 2025 14:38:01 +0000 (0:00:01.189) 0:03:43.948 ************
2025-05-19 14:45:07.801592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 14:45:07.801599 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-19 14:45:07.801607 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-19 14:45:07.801615 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.801623 | orchestrator |
2025-05-19 14:45:07.801630 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-05-19 14:45:07.801638 | orchestrator | Monday 19 May 2025 14:38:02 +0000 (0:00:00.650) 0:03:44.599 ************
2025-05-19 14:45:07.801646 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.801654 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.801661 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.801669 | orchestrator |
2025-05-19 14:45:07.801677 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-05-19 14:45:07.801685 | orchestrator |
2025-05-19 14:45:07.801692 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-19 14:45:07.801700 | orchestrator | Monday 19 May 2025 14:38:02 +0000 (0:00:00.609) 0:03:45.208 ************
2025-05-19 14:45:07.801708 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:45:07.801716 | orchestrator |
2025-05-19 14:45:07.801723 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-19 14:45:07.801731 | orchestrator | Monday 19 May 2025 14:38:03 +0000 (0:00:00.410) 0:03:45.619 ************
2025-05-19 14:45:07.801738 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:45:07.801778 | orchestrator |
2025-05-19 14:45:07.801785 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-19 14:45:07.801792 | orchestrator | Monday 19 May 2025 14:38:03 +0000 (0:00:00.585) 0:03:46.205 ************
2025-05-19 14:45:07.801798 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.801805 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.801811 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.801818 | orchestrator |
2025-05-19 14:45:07.801825 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-19 14:45:07.801837 | orchestrator | Monday 19 May 2025 14:38:04 +0000 (0:00:00.738) 0:03:46.943 ************
2025-05-19 14:45:07.801843 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.801850 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.801857 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.801863 | orchestrator |
2025-05-19 14:45:07.801870 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-19 14:45:07.801877 | orchestrator | Monday 19 May 2025 14:38:04 +0000 (0:00:00.340) 0:03:47.283 ************
2025-05-19 14:45:07.801883 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.801890 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.801896 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.801903 | orchestrator |
2025-05-19 14:45:07.801909 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-19 14:45:07.801916 | orchestrator | Monday 19 May 2025 14:38:05 +0000 (0:00:00.285) 0:03:47.569 ************
2025-05-19 14:45:07.801922 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.801929 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.801936 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.801942 | orchestrator |
2025-05-19 14:45:07.801949 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-19 14:45:07.801955 | orchestrator | Monday 19 May 2025 14:38:05 +0000 (0:00:00.550) 0:03:48.119 ************
2025-05-19 14:45:07.801962 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.801968 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.801975 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.801982 | orchestrator |
2025-05-19 14:45:07.801988 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-05-19 14:45:07.801995 | orchestrator | Monday 19 May 2025 14:38:06 +0000 (0:00:00.679) 0:03:48.799 ************
2025-05-19 14:45:07.802002 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.802008 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.802035 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.802043 | orchestrator |
2025-05-19 14:45:07.802054 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-19 14:45:07.802061 | orchestrator | Monday 19 May 2025 14:38:06 +0000 (0:00:00.281) 0:03:49.080 ************
2025-05-19 14:45:07.802067 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.802074 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.802081 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.802087 | orchestrator |
2025-05-19 14:45:07.802094 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-05-19 14:45:07.802123 | orchestrator | Monday 19 May 2025 14:38:07 +0000 (0:00:00.290) 0:03:49.370 ************
2025-05-19 14:45:07.802131 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.802138 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.802144 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.802151 | orchestrator |
2025-05-19 14:45:07.802158 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-05-19 14:45:07.802164 | orchestrator | Monday 19 May 2025 14:38:07 +0000 (0:00:00.818) 0:03:50.189 ************
2025-05-19 14:45:07.802171 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.802178 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.802184 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.802191 | orchestrator |
2025-05-19 14:45:07.802197 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-05-19 14:45:07.802204 | orchestrator | Monday 19 May 2025 14:38:08 +0000 (0:00:00.665) 0:03:50.854 ************
2025-05-19 14:45:07.802210 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.802217 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.802224 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.802230 | orchestrator |
2025-05-19 14:45:07.802237 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-05-19 14:45:07.802251 | orchestrator | Monday 19 May 2025 14:38:08 +0000 (0:00:00.249) 0:03:51.103 ************
2025-05-19 14:45:07.802258 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.802264 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.802271 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.802277 | orchestrator |
2025-05-19 14:45:07.802284 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-05-19 14:45:07.802291 | orchestrator | Monday 19 May 2025 14:38:09 +0000 (0:00:00.318) 0:03:51.422 ************
2025-05-19 14:45:07.802297 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.802304 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.802324 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.802331 | orchestrator |
2025-05-19 14:45:07.802337 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-05-19 14:45:07.802344 | orchestrator | Monday 19 May 2025 14:38:09 +0000 (0:00:00.413) 0:03:51.836 ************
2025-05-19 14:45:07.802351 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.802357 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.802364 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.802370 | orchestrator |
2025-05-19 14:45:07.802377 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-05-19 14:45:07.802383 | orchestrator | Monday 19 May 2025 14:38:09 +0000 (0:00:00.257) 0:03:52.093 ************
2025-05-19 14:45:07.802390 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.802397 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.802403 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.802410 | orchestrator |
2025-05-19 14:45:07.802416 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-05-19 14:45:07.802423 | orchestrator | Monday 19 May 2025 14:38:10 +0000 (0:00:00.240) 0:03:52.334 ************
2025-05-19 14:45:07.802429 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.802436 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.802447 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.802458 | orchestrator |
2025-05-19 14:45:07.802465 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-05-19 14:45:07.802472 | orchestrator | Monday 19 May 2025 14:38:10 +0000 (0:00:00.237) 0:03:52.572 ************
2025-05-19 14:45:07.802478 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.802485 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.802491 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.802498 | orchestrator |
2025-05-19 14:45:07.802504 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-05-19 14:45:07.802511 | orchestrator | Monday 19 May 2025 14:38:10 +0000 (0:00:00.421) 0:03:52.994 ************
2025-05-19 14:45:07.802518 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.802524 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.802530 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.802537 | orchestrator |
2025-05-19 14:45:07.802543 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-05-19 14:45:07.802550 | orchestrator | Monday 19 May 2025 14:38:10 +0000 (0:00:00.276) 0:03:53.279 ************
2025-05-19 14:45:07.802557 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.802563 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.802570 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.802576 | orchestrator |
2025-05-19 14:45:07.802583 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-05-19 14:45:07.802589 | orchestrator | Monday 19 May 2025 14:38:11 +0000 (0:00:00.276) 0:03:53.556 ************
2025-05-19 14:45:07.802596 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.802602 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.802609 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.802615 | orchestrator |
2025-05-19 14:45:07.802622 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-05-19 14:45:07.802628 | orchestrator | Monday 19 May 2025 14:38:11 +0000 (0:00:00.568) 0:03:54.124 ************
2025-05-19 14:45:07.802640 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.802646 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.802652 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.802659 | orchestrator |
2025-05-19 14:45:07.802666 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-05-19 14:45:07.802672 | orchestrator | Monday 19 May 2025 14:38:12 +0000 (0:00:00.315) 0:03:54.440 ************
2025-05-19 14:45:07.802679 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:45:07.802686 | orchestrator |
2025-05-19 14:45:07.802692 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-05-19 14:45:07.802703 | orchestrator | Monday 19 May 2025 14:38:12 +0000 (0:00:00.527) 0:03:54.967 ************
2025-05-19 14:45:07.802710 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.802716 | orchestrator |
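With check_running_containers.yml done, the role resolves container_exec_cmd, the prefix used to run every ceph CLI call inside the mon container rather than on the host. A plausible sketch of that fact and a typical use; the exact expression ceph-ansible uses may differ, and container_binary is an assumed variable:

    - name: Set_fact container_exec_cmd  # sketch; ceph-ansible derives this similarly
      ansible.builtin.set_fact:
        container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"

    - name: Example use of the fact
      ansible.builtin.command: "{{ container_exec_cmd }} ceph --cluster ceph -s"
      register: ceph_status
      changed_when: false

With that in place, deploy_monitors.yml (included above) can bootstrap the monitors without installing Ceph packages on the host.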
2025-05-19 14:45:07.802723 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-05-19 14:45:07.802729 | orchestrator | Monday 19 May 2025 14:38:12 +0000 (0:00:00.138) 0:03:55.105 ************
2025-05-19 14:45:07.802736 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-19 14:45:07.802743 | orchestrator |
2025-05-19 14:45:07.802769 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-05-19 14:45:07.802777 | orchestrator | Monday 19 May 2025 14:38:14 +0000 (0:00:01.546) 0:03:56.652 ************
2025-05-19 14:45:07.802783 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.802790 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.802796 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.802802 | orchestrator |
2025-05-19 14:45:07.802809 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-05-19 14:45:07.802816 | orchestrator | Monday 19 May 2025 14:38:14 +0000 (0:00:00.330) 0:03:56.982 ************
2025-05-19 14:45:07.802822 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.802828 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.802835 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.802841 | orchestrator |
2025-05-19 14:45:07.802848 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-05-19 14:45:07.802854 | orchestrator | Monday 19 May 2025 14:38:14 +0000 (0:00:00.326) 0:03:57.309 ************
2025-05-19 14:45:07.802861 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:45:07.802868 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.802874 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.802880 | orchestrator |
2025-05-19 14:45:07.802887 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-05-19 14:45:07.802893 | orchestrator | Monday 19 May 2025 14:38:16 +0000 (0:00:01.165) 0:03:58.474 ************
2025-05-19 14:45:07.802900 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.802906 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:45:07.802913 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.802919 | orchestrator |
2025-05-19 14:45:07.802926 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-05-19 14:45:07.802933 | orchestrator | Monday 19 May 2025 14:38:17 +0000 (0:00:01.021) 0:03:59.496 ************
2025-05-19 14:45:07.802939 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.802946 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:45:07.802952 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.802958 | orchestrator |
2025-05-19 14:45:07.802965 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-05-19 14:45:07.802971 | orchestrator | Monday 19 May 2025 14:38:17 +0000 (0:00:00.643) 0:04:00.140 ************
2025-05-19 14:45:07.802978 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.802984 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.802991 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.802997 | orchestrator |
2025-05-19 14:45:07.803004 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-05-19 14:45:07.803010 | orchestrator | Monday 19 May 2025 14:38:18 +0000 (0:00:00.616) 0:04:00.756 ************
2025-05-19 14:45:07.803023 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.803029 | orchestrator |
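The keyring bootstrap in this block and the next follows Ceph's manual-deployment flow: generate a mon. key once, write it on every monitor, create the client.admin keyring on the first mon, then merge admin into the mon keyring so mkfs seeds both. A sketch of the equivalent ceph-authtool calls; paths follow the Ceph documentation, while the role wraps the same steps in its own modules:

    - name: Create monitor initial keyring
      ansible.builtin.command: >-
        ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring
        --gen-key -n mon. --cap mon 'allow *'
      args:
        creates: /etc/ceph/ceph.mon.keyring

    - name: Create admin keyring
      ansible.builtin.command: >-
        ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring
        --gen-key -n client.admin
        --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
      args:
        creates: /etc/ceph/ceph.client.admin.keyring

    - name: Import admin keyring into mon keyring
      ansible.builtin.command: >-
        ceph-authtool /etc/ceph/ceph.mon.keyring
        --import-keyring /etc/ceph/ceph.client.admin.keyring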
2025-05-19 14:45:07.803036 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-05-19 14:45:07.803042 | orchestrator | Monday 19 May 2025 14:38:19 +0000 (0:00:01.246) 0:04:02.003 ************
2025-05-19 14:45:07.803049 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.803055 | orchestrator |
2025-05-19 14:45:07.803062 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-05-19 14:45:07.803068 | orchestrator | Monday 19 May 2025 14:38:20 +0000 (0:00:00.638) 0:04:02.641 ************
2025-05-19 14:45:07.803075 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-19 14:45:07.803081 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:45:07.803088 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:45:07.803094 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 14:45:07.803101 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-05-19 14:45:07.803108 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 14:45:07.803114 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 14:45:07.803121 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-05-19 14:45:07.803127 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 14:45:07.803134 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-05-19 14:45:07.803140 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-05-19 14:45:07.803147 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-05-19 14:45:07.803154 | orchestrator |
2025-05-19 14:45:07.803160 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-05-19 14:45:07.803167 | orchestrator | Monday 19 May 2025 14:38:23 +0000 (0:00:03.385) 0:04:06.027 ************
2025-05-19 14:45:07.803173 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.803180 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:45:07.803186 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.803193 | orchestrator |
2025-05-19 14:45:07.803199 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-05-19 14:45:07.803206 | orchestrator | Monday 19 May 2025 14:38:25 +0000 (0:00:01.635) 0:04:07.663 ************
2025-05-19 14:45:07.803212 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.803219 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.803225 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.803231 | orchestrator |
2025-05-19 14:45:07.803238 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-05-19 14:45:07.803244 | orchestrator | Monday 19 May 2025 14:38:25 +0000 (0:00:00.387) 0:04:08.050 ************
2025-05-19 14:45:07.803251 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.803257 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.803263 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.803270 | orchestrator |
2025-05-19 14:45:07.803280 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-05-19 14:45:07.803287 | orchestrator | Monday 19 May 2025 14:38:26 +0000 (0:00:00.298) 0:04:08.349 ************
2025-05-19 14:45:07.803293 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.803300 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:45:07.803306 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.803350 | orchestrator |
2025-05-19 14:45:07.803357 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-05-19 14:45:07.803384 | orchestrator | Monday 19 May 2025 14:38:28 +0000 (0:00:02.245) 0:04:10.595 ************
2025-05-19 14:45:07.803392 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.803398 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:45:07.803405 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.803412 | orchestrator |
2025-05-19 14:45:07.803418 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-05-19 14:45:07.803430 | orchestrator | Monday 19 May 2025 14:38:29 +0000 (0:00:01.577) 0:04:12.172 ************
2025-05-19 14:45:07.803437 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.803443 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.803450 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.803456 | orchestrator |
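"Generate initial monmap" and the mkfs task correspond to the standard monmaptool / ceph-mon bootstrap. A sketch with the three mon addresses taken from this log's inventory; the fsid variable is assumed, and the real tasks run these tools through the container commands set just above:

    - name: Generate initial monmap
      ansible.builtin.command: >-
        monmaptool --create
        --add testbed-node-0 192.168.16.10
        --add testbed-node-1 192.168.16.11
        --add testbed-node-2 192.168.16.12
        --fsid {{ fsid }} /etc/ceph/monmap
      args:
        creates: /etc/ceph/monmap

    - name: Ceph monitor mkfs with keyring
      ansible.builtin.command: >-
        ceph-mon --mkfs -i {{ ansible_facts['hostname'] }}
        --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring

The "without keyring" variant is skipped, presumably because cephx authentication is enabled in the testbed.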
2025-05-19 14:45:07.803463 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-05-19 14:45:07.803470 | orchestrator | Monday 19 May 2025 14:38:30 +0000 (0:00:00.228) 0:04:12.400 ************
2025-05-19 14:45:07.803476 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:45:07.803483 | orchestrator |
2025-05-19 14:45:07.803489 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-05-19 14:45:07.803496 | orchestrator | Monday 19 May 2025 14:38:30 +0000 (0:00:00.383) 0:04:12.784 ************
2025-05-19 14:45:07.803503 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.803509 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.803516 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.803522 | orchestrator |
2025-05-19 14:45:07.803529 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-05-19 14:45:07.803535 | orchestrator | Monday 19 May 2025 14:38:30 +0000 (0:00:00.397) 0:04:13.181 ************
2025-05-19 14:45:07.803542 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.803548 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.803555 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.803561 | orchestrator |
2025-05-19 14:45:07.803568 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-05-19 14:45:07.803575 | orchestrator | Monday 19 May 2025 14:38:31 +0000 (0:00:00.241) 0:04:13.423 ************
2025-05-19 14:45:07.803581 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:45:07.803588 | orchestrator |
2025-05-19 14:45:07.803594 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-05-19 14:45:07.803601 | orchestrator | Monday 19 May 2025 14:38:31 +0000 (0:00:00.430) 0:04:13.854 ************
2025-05-19 14:45:07.803608 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.803614 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:45:07.803621 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.803627 | orchestrator |
2025-05-19 14:45:07.803634 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-05-19 14:45:07.803640 | orchestrator | Monday 19 May 2025 14:38:33 +0000 (0:00:01.511) 0:04:15.365 ************
2025-05-19 14:45:07.803647 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.803653 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:45:07.803660 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.803667 | orchestrator |
2025-05-19 14:45:07.803673 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-05-19 14:45:07.803680 | orchestrator | Monday 19 May 2025 14:38:34 +0000 (0:00:01.065) 0:04:16.431 ************
2025-05-19 14:45:07.803686 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:45:07.803693 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.803700 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.803706 | orchestrator |
2025-05-19 14:45:07.803713 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-05-19 14:45:07.803719 | orchestrator | Monday 19 May 2025 14:38:35 +0000 (0:00:01.850) 0:04:18.281 ************
2025-05-19 14:45:07.803726 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:45:07.803733 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:45:07.803739 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:45:07.803746 | orchestrator |
2025-05-19 14:45:07.803752 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-05-19 14:45:07.803759 | orchestrator | Monday 19 May 2025 14:38:37 +0000 (0:00:01.910) 0:04:20.192 ************
2025-05-19 14:45:07.803769 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:45:07.803775 | orchestrator |
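In this containerized deployment the mon does not use the distro's stock ceph-mon unit; the role renders its own ceph-mon@.service that runs the container, plus a ceph-mon.target grouping the instances. The enable/start steps then reduce to plain systemd calls, roughly (unit names assumed to follow the usual ceph-mon@<hostname> pattern):

    - name: Enable ceph-mon.target
      ansible.builtin.systemd:
        name: ceph-mon.target
        enabled: true
        daemon_reload: true

    - name: Start the monitor service
      ansible.builtin.systemd:
        name: "ceph-mon@{{ ansible_facts['hostname'] }}"
        state: started
        enabled: true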
2025-05-19 14:45:07.803781 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-05-19 14:45:07.803787 | orchestrator | Monday 19 May 2025 14:38:38 +0000 (0:00:00.916) 0:04:21.108 ************
2025-05-19 14:45:07.803793 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-05-19 14:45:07.803799 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.803805 | orchestrator |
2025-05-19 14:45:07.803811 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-05-19 14:45:07.803818 | orchestrator | Monday 19 May 2025 14:39:00 +0000 (0:00:21.841) 0:04:42.950 ************
2025-05-19 14:45:07.803824 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.803830 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.803836 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.803842 | orchestrator |
2025-05-19 14:45:07.803848 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-05-19 14:45:07.803854 | orchestrator | Monday 19 May 2025 14:39:10 +0000 (0:00:10.043) 0:04:52.994 ************
2025-05-19 14:45:07.803860 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.803866 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.803872 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.803878 | orchestrator |
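The quorum wait above polls the cluster until all monitors report in; a single FAILED - RETRYING line followed by ok after roughly 22 seconds is the expected first-boot behaviour, not an error. A sketch of such an until-loop; the group name, retry counts, and the exact command are assumptions:

    - name: Waiting for the monitor(s) to form the quorum...
      ansible.builtin.command: "{{ container_exec_cmd }} ceph --cluster ceph quorum_status --format json"
      register: quorum_raw
      changed_when: false
      retries: 10
      delay: 20
      until: >-
        (quorum_raw.stdout | from_json)['quorum_names'] | length
        == groups['mons'] | length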
2025-05-19 14:45:07.803887 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-05-19 14:45:07.803894 | orchestrator | Monday 19 May 2025 14:39:11 +0000 (0:00:00.461) 0:04:53.455 ************
2025-05-19 14:45:07.803917 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0e5486f39df6c0c7fbe3946709e728bbf508b807'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-05-19 14:45:07.803927 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0e5486f39df6c0c7fbe3946709e728bbf508b807'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-05-19 14:45:07.803935 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0e5486f39df6c0c7fbe3946709e728bbf508b807'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-05-19 14:45:07.803942 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0e5486f39df6c0c7fbe3946709e728bbf508b807'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-05-19 14:45:07.803949 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0e5486f39df6c0c7fbe3946709e728bbf508b807'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-05-19 14:45:07.803955 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0e5486f39df6c0c7fbe3946709e728bbf508b807'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0e5486f39df6c0c7fbe3946709e728bbf508b807'}])
2025-05-19 14:45:07.803968 | orchestrator |
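"Set cluster configs" pushes the configured options into the cluster's configuration database; the item carrying the omit placeholder for osd_crush_chooseleaf_type is correctly skipped. A sketch of the equivalent CLI, with the values taken from the items logged above:

    - name: Set cluster configs  # sketch of the equivalent `ceph config set` calls
      ansible.builtin.command: "{{ container_exec_cmd }} ceph config set global {{ item.key }} {{ item.value }}"
      loop:
        - { key: public_network, value: "192.168.16.0/20" }
        - { key: cluster_network, value: "192.168.16.0/20" }
        - { key: osd_pool_default_crush_rule, value: "-1" }
        - { key: ms_bind_ipv6, value: "false" }
        - { key: ms_bind_ipv4, value: "true" }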
2025-05-19 14:45:07.803974 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-05-19 14:45:07.803980 | orchestrator | Monday 19 May 2025 14:39:25 +0000 (0:00:14.397) 0:05:07.852 ************
2025-05-19 14:45:07.803986 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.803993 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.803999 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804005 | orchestrator |
2025-05-19 14:45:07.804011 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-05-19 14:45:07.804017 | orchestrator | Monday 19 May 2025 14:39:25 +0000 (0:00:00.384) 0:05:08.237 ************
2025-05-19 14:45:07.804023 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:45:07.804029 | orchestrator |
2025-05-19 14:45:07.804035 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-05-19 14:45:07.804041 | orchestrator | Monday 19 May 2025 14:39:26 +0000 (0:00:00.887) 0:05:09.124 ************
2025-05-19 14:45:07.804048 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.804054 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.804060 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.804066 | orchestrator |
2025-05-19 14:45:07.804072 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-05-19 14:45:07.804078 | orchestrator | Monday 19 May 2025 14:39:27 +0000 (0:00:00.331) 0:05:09.455 ************
2025-05-19 14:45:07.804084 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804091 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804097 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804103 | orchestrator |
2025-05-19 14:45:07.804109 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-05-19 14:45:07.804115 | orchestrator | Monday 19 May 2025 14:39:27 +0000 (0:00:00.297) 0:05:09.752 ************
2025-05-19 14:45:07.804121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 14:45:07.804127 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-19 14:45:07.804133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-19 14:45:07.804139 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804145 | orchestrator |
2025-05-19 14:45:07.804152 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-05-19 14:45:07.804161 | orchestrator | Monday 19 May 2025 14:39:28 +0000 (0:00:00.838) 0:05:10.591 ************
2025-05-19 14:45:07.804167 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.804173 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.804179 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.804185 | orchestrator |
2025-05-19 14:45:07.804191 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-05-19 14:45:07.804197 | orchestrator |
2025-05-19 14:45:07.804203 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-19 14:45:07.804225 | orchestrator | Monday 19 May 2025 14:39:29 +0000 (0:00:00.822) 0:05:11.414 ************
2025-05-19 14:45:07.804233 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:45:07.804239 | orchestrator |
2025-05-19 14:45:07.804245 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-19 14:45:07.804251 | orchestrator | Monday 19 May 2025 14:39:29 +0000 (0:00:00.431) 0:05:11.845 ************
2025-05-19 14:45:07.804257 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:45:07.804263 | orchestrator |
2025-05-19 14:45:07.804269 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-19 14:45:07.804275 | orchestrator | Monday 19 May 2025 14:39:30 +0000 (0:00:00.525) 0:05:12.370 ************
2025-05-19 14:45:07.804286 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.804292 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.804298 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.804304 | orchestrator |
2025-05-19 14:45:07.804325 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-19 14:45:07.804331 | orchestrator | Monday 19 May 2025 14:39:30 +0000 (0:00:00.697) 0:05:13.067 ************
2025-05-19 14:45:07.804337 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804344 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804350 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804356 | orchestrator |
2025-05-19 14:45:07.804362 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-19 14:45:07.804368 | orchestrator | Monday 19 May 2025 14:39:30 +0000 (0:00:00.255) 0:05:13.323 ************
2025-05-19 14:45:07.804374 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804380 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804386 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804392 | orchestrator |
2025-05-19 14:45:07.804398 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-19 14:45:07.804404 | orchestrator | Monday 19 May 2025 14:39:31 +0000 (0:00:00.378) 0:05:13.702 ************
2025-05-19 14:45:07.804410 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804416 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804422 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804428 | orchestrator |
2025-05-19 14:45:07.804434 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-19 14:45:07.804440 | orchestrator | Monday 19 May 2025 14:39:31 +0000 (0:00:00.208) 0:05:13.910 ************
2025-05-19 14:45:07.804447 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.804453 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.804459 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.804465 | orchestrator |
2025-05-19 14:45:07.804471 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-05-19 14:45:07.804477 | orchestrator | Monday 19 May 2025 14:39:32 +0000 (0:00:00.593) 0:05:14.503 ************
2025-05-19 14:45:07.804483 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804489 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804495 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804502 | orchestrator |
2025-05-19 14:45:07.804508 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-19 14:45:07.804514 | orchestrator | Monday 19 May 2025 14:39:32 +0000 (0:00:00.241) 0:05:14.745 ************
2025-05-19 14:45:07.804520 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804526 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804532 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804538 | orchestrator |
2025-05-19 14:45:07.804544 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-05-19 14:45:07.804550 | orchestrator | Monday 19 May 2025 14:39:32 +0000 (0:00:00.403) 0:05:15.148 ************
2025-05-19 14:45:07.804556 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.804562 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.804568 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.804574 | orchestrator |
2025-05-19 14:45:07.804581 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-05-19 14:45:07.804587 | orchestrator | Monday 19 May 2025 14:39:33 +0000 (0:00:00.668) 0:05:15.817 ************
2025-05-19 14:45:07.804593 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.804599 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.804605 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.804611 | orchestrator |
2025-05-19 14:45:07.804617 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-05-19 14:45:07.804623 | orchestrator | Monday 19 May 2025 14:39:34 +0000 (0:00:00.615) 0:05:16.432 ************
2025-05-19 14:45:07.804629 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804639 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804645 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804651 | orchestrator |
2025-05-19 14:45:07.804657 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-05-19 14:45:07.804663 | orchestrator | Monday 19 May 2025 14:39:34 +0000 (0:00:00.262) 0:05:16.694 ************
2025-05-19 14:45:07.804669 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.804675 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.804681 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.804687 | orchestrator |
2025-05-19 14:45:07.804693 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-05-19 14:45:07.804699 | orchestrator | Monday 19 May 2025 14:39:34 +0000 (0:00:00.430) 0:05:17.124 ************
2025-05-19 14:45:07.804705 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804711 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804717 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804723 | orchestrator |
2025-05-19 14:45:07.804733 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-05-19 14:45:07.804739 | orchestrator | Monday 19 May 2025 14:39:35 +0000 (0:00:00.264) 0:05:17.389 ************
2025-05-19 14:45:07.804745 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804751 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804757 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804763 | orchestrator |
2025-05-19 14:45:07.804769 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-05-19 14:45:07.804792 | orchestrator | Monday 19 May 2025 14:39:35 +0000 (0:00:00.243) 0:05:17.632 ************
2025-05-19 14:45:07.804799 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804805 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804812 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804818 | orchestrator |
2025-05-19 14:45:07.804824 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-05-19 14:45:07.804830 | orchestrator | Monday 19 May 2025 14:39:35 +0000 (0:00:00.250) 0:05:17.883 ************
2025-05-19 14:45:07.804836 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804842 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804848 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804854 | orchestrator |
2025-05-19 14:45:07.804860 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-05-19 14:45:07.804867 | orchestrator | Monday 19 May 2025 14:39:35 +0000 (0:00:00.387) 0:05:18.271 ************
2025-05-19 14:45:07.804873 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:45:07.804879 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:45:07.804885 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:45:07.804891 | orchestrator |
2025-05-19 14:45:07.804897 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-05-19 14:45:07.804903 | orchestrator | Monday 19 May 2025 14:39:36 +0000 (0:00:00.257) 0:05:18.528 ************
2025-05-19 14:45:07.804909 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.804915 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.804921 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.804927 | orchestrator |
2025-05-19 14:45:07.804933 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-05-19 14:45:07.804939 | orchestrator | Monday 19 May 2025 14:39:36 +0000 (0:00:00.274) 0:05:18.803 ************
2025-05-19 14:45:07.804945 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.804951 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.804957 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.804963 | orchestrator |
2025-05-19 14:45:07.804969 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-05-19 14:45:07.804976 | orchestrator | Monday 19 May 2025 14:39:36 +0000 (0:00:00.268) 0:05:19.071 ************
2025-05-19 14:45:07.804982 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:45:07.804988 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:45:07.804993 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:45:07.805004 | orchestrator |
14:45:07.805029 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 14:45:07.805035 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 14:45:07.805041 | orchestrator | 2025-05-19 14:45:07.805047 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-05-19 14:45:07.805053 | orchestrator | Monday 19 May 2025 14:39:37 +0000 (0:00:00.561) 0:05:20.214 ************ 2025-05-19 14:45:07.805059 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:45:07.805066 | orchestrator | 2025-05-19 14:45:07.805072 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-05-19 14:45:07.805078 | orchestrator | Monday 19 May 2025 14:39:38 +0000 (0:00:00.521) 0:05:20.735 ************ 2025-05-19 14:45:07.805084 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:45:07.805090 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:45:07.805096 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:45:07.805102 | orchestrator | 2025-05-19 14:45:07.805108 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-05-19 14:45:07.805114 | orchestrator | Monday 19 May 2025 14:39:39 +0000 (0:00:00.941) 0:05:21.677 ************ 2025-05-19 14:45:07.805120 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.805126 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.805132 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.805138 | orchestrator | 2025-05-19 14:45:07.805144 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-05-19 14:45:07.805150 | orchestrator | Monday 19 May 2025 14:39:39 +0000 (0:00:00.364) 0:05:22.041 ************ 2025-05-19 14:45:07.805157 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 14:45:07.805163 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 14:45:07.805169 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 14:45:07.805175 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-19 14:45:07.805181 | orchestrator | 2025-05-19 14:45:07.805187 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-05-19 14:45:07.805193 | orchestrator | Monday 19 May 2025 14:39:49 +0000 (0:00:10.083) 0:05:32.125 ************ 2025-05-19 14:45:07.805199 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.805205 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.805211 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.805217 | orchestrator | 2025-05-19 14:45:07.805224 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-05-19 14:45:07.805230 | orchestrator | Monday 19 May 2025 14:39:50 +0000 (0:00:00.308) 0:05:32.433 ************ 2025-05-19 14:45:07.805236 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-19 14:45:07.805242 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-19 14:45:07.805248 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-19 14:45:07.805257 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-19 14:45:07.805264 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 
TASK [ceph-mgr : Get keys from monitors] ***************************************
Monday 19 May 2025 14:39:50 +0000 (0:00:00.308) 0:05:32.433 ************
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Monday 19 May 2025 14:39:52 +0000 (0:00:02.622) 0:05:35.056 ************
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Monday 19 May 2025 14:39:53 +0000 (0:00:01.117) 0:05:36.174 ************
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Monday 19 May 2025 14:39:54 +0000 (0:00:00.659) 0:05:36.834 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Monday 19 May 2025 14:39:54 +0000 (0:00:00.294) 0:05:37.129 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Monday 19 May 2025 14:39:55 +0000 (0:00:00.510) 0:05:37.639 ************
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Monday 19 May 2025 14:39:55 +0000 (0:00:00.554) 0:05:38.193 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Monday 19 May 2025 14:39:56 +0000 (0:00:00.367) 0:05:38.561 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Monday 19 May 2025 14:39:56 +0000 (0:00:00.358) 0:05:38.920 ************
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Monday 19 May 2025 14:39:57 +0000 (0:00:00.837) 0:05:39.758 ************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Monday 19 May 2025 14:39:58 +0000 (0:00:01.182) 0:05:40.940 ************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Monday 19 May 2025 14:39:59 +0000 (0:00:01.106) 0:05:42.046 ************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Monday 19 May 2025 14:40:01 +0000 (0:00:02.094) 0:05:44.141 ************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
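The four tasks above template a per-host ceph-mgr service unit plus a ceph-mgr.target and bring them up. Done by hand the sequence is roughly the following; the instance-style unit name follows ceph-ansible's template convention and the hostname suffix is an assumption:

    $ systemctl daemon-reload
    $ systemctl enable ceph-mgr.target
    $ systemctl start ceph-mgr@testbed-node-0.service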
TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Monday 19 May 2025 14:40:03 +0000 (0:00:01.939) 0:05:46.080 ************
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Monday 19 May 2025 14:40:04 +0000 (0:00:00.362) 0:05:46.443 ************
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Monday 19 May 2025 14:40:28 +0000 (0:00:24.177) 0:06:10.620 ************
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Monday 19 May 2025 14:40:29 +0000 (0:00:01.386) 0:06:12.006 ************
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Monday 19 May 2025 14:40:30 +0000 (0:00:00.815) 0:06:12.822 ************
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Monday 19 May 2025 14:40:30 +0000 (0:00:00.146) 0:06:12.968 ************
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Monday 19 May 2025 14:40:37 +0000 (0:00:06.453) 0:06:19.421 ************
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)
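The module reconciliation above is plain ceph CLI driven from one node against the first mon. The subcommands are real ceph mgr commands; only the exact wrapping (running them inside the mon container) is an assumption:

    $ ceph mgr module ls -f json      # enabled vs. available modules
    $ ceph mgr module disable iostat
    $ ceph mgr module disable nfs
    $ ceph mgr module disable restful
    $ ceph mgr module enable dashboard
    $ ceph mgr module enable prometheus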
RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 19 May 2025 14:40:41 +0000 (0:00:04.744) 0:06:24.166 ************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Monday 19 May 2025 14:40:42 +0000 (0:00:00.948) 0:06:25.115 ************
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Monday 19 May 2025 14:40:43 +0000 (0:00:00.529) 0:06:25.645 ************
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Monday 19 May 2025 14:40:43 +0000 (0:00:00.336) 0:06:25.981 ************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Monday 19 May 2025 14:40:45 +0000 (0:00:01.546) 0:06:27.527 ************
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Monday 19 May 2025 14:40:45 +0000 (0:00:00.644) 0:06:28.172 ************
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 19 May 2025 14:40:46 +0000 (0:00:00.521) 0:06:28.693 ************
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 19 May 2025 14:40:47 +0000 (0:00:00.849) 0:06:29.542 ************
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
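The per-daemon "Check for a ... container" tasks that follow just probe the container runtime for a matching name and register whether anything came back. Sketched for one daemon type; podman as the container binary and the name filter pattern are assumptions for this containerized deployment:

    $ podman ps -q --filter "name=ceph-osd"   # non-empty stdout => an OSD container is running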
TASK [ceph-handler : Check for a mon container] ********************************
Monday 19 May 2025 14:40:47 +0000 (0:00:00.620) 0:06:30.163 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 19 May 2025 14:40:48 +0000 (0:00:00.291) 0:06:30.454 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 19 May 2025 14:40:49 +0000 (0:00:01.044) 0:06:31.498 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 19 May 2025 14:40:49 +0000 (0:00:00.683) 0:06:32.182 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 19 May 2025 14:40:50 +0000 (0:00:00.662) 0:06:32.845 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 19 May 2025 14:40:50 +0000 (0:00:00.326) 0:06:33.171 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 19 May 2025 14:40:51 +0000 (0:00:00.492) 0:06:33.664 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 19 May 2025 14:40:51 +0000 (0:00:00.274) 0:06:33.939 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 19 May 2025 14:40:52 +0000 (0:00:00.634) 0:06:34.573 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 19 May 2025 14:40:52 +0000 (0:00:00.649) 0:06:35.223 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 19 May 2025 14:40:53 +0000 (0:00:00.588) 0:06:35.811 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 19 May 2025 14:40:53 +0000 (0:00:00.289) 0:06:36.101 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 19 May 2025 14:40:54 +0000 (0:00:00.342) 0:06:36.443 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 19 May 2025 14:40:54 +0000 (0:00:00.348) 0:06:36.791 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 19 May 2025 14:40:55 +0000 (0:00:00.787) 0:06:37.579 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 19 May 2025 14:40:55 +0000 (0:00:00.405) 0:06:37.985 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 19 May 2025 14:40:56 +0000 (0:00:00.414) 0:06:38.400 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 19 May 2025 14:40:56 +0000 (0:00:00.309) 0:06:38.709 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 19 May 2025 14:40:57 +0000 (0:00:00.676) 0:06:39.386 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Monday 19 May 2025 14:40:57 +0000 (0:00:00.574) 0:06:39.960 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Monday 19 May 2025 14:40:57 +0000 (0:00:00.318) 0:06:40.278 ************
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
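container_exec_cmd is the prefix later tasks put in front of the ceph CLI so it runs inside a monitor container instead of on the host. It typically expands to something like the following; the binary and container name are assumptions for this deployment:

    $ podman exec ceph-mon-testbed-node-0 ceph --cluster ceph -s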
TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Monday 19 May 2025 14:40:58 +0000 (0:00:00.834) 0:06:41.113 ************
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Monday 19 May 2025 14:40:59 +0000 (0:00:00.708) 0:06:41.821 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Monday 19 May 2025 14:40:59 +0000 (0:00:00.292) 0:06:42.114 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Monday 19 May 2025 14:41:00 +0000 (0:00:00.275) 0:06:42.390 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Monday 19 May 2025 14:41:00 +0000 (0:00:00.901) 0:06:43.291 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Monday 19 May 2025 14:41:01 +0000 (0:00:00.377) 0:06:43.669 ************
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
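The tuning items above are ordinary sysctls with exactly the values shown in the log; applied by hand they amount to this (the role also persists them to a sysctl configuration file, whose exact path is an assumption):

    $ sysctl -w fs.aio-max-nr=1048576
    $ sysctl -w fs.file-max=26234859
    $ sysctl -w vm.zone_reclaim_mode=0
    $ sysctl -w vm.swappiness=10
    $ sysctl -w vm.min_free_kbytes=67584   # derived from the default probed above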
TASK [ceph-osd : Install dependencies] *****************************************
Monday 19 May 2025 14:41:04 +0000 (0:00:02.800) 0:06:46.469 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Monday 19 May 2025 14:41:04 +0000 (0:00:00.318) 0:06:46.787 ************
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Monday 19 May 2025 14:41:05 +0000 (0:00:00.909) 0:06:47.697 ************
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Monday 19 May 2025 14:41:06 +0000 (0:00:00.891) 0:06:48.589 ************
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Monday 19 May 2025 14:41:08 +0000 (0:00:01.896) 0:06:50.485 ************
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Monday 19 May 2025 14:41:09 +0000 (0:00:01.340) 0:06:51.826 ************
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Monday 19 May 2025 14:41:11 +0000 (0:00:02.021) 0:06:53.848 ************
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Monday 19 May 2025 14:41:12 +0000 (0:00:00.568) 0:06:54.416 ************
changed: [testbed-node-4] => (item={'data': 'osd-block-14b77220-8a02-5c14-b369-aaa75d02e7a5', 'data_vg': 'ceph-14b77220-8a02-5c14-b369-aaa75d02e7a5'})
changed: [testbed-node-3] => (item={'data': 'osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f', 'data_vg': 'ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f'})
changed: [testbed-node-5] => (item={'data': 'osd-block-18cd8a80-96d5-5946-80eb-7d63885b2b76', 'data_vg': 'ceph-18cd8a80-96d5-5946-80eb-7d63885b2b76'})
changed: [testbed-node-4] => (item={'data': 'osd-block-d28da045-49d6-58b1-95f0-26301c413660', 'data_vg': 'ceph-d28da045-49d6-58b1-95f0-26301c413660'})
changed: [testbed-node-3] => (item={'data': 'osd-block-be132d09-93e5-58e2-99ec-48d3b83dc2dd', 'data_vg': 'ceph-be132d09-93e5-58e2-99ec-48d3b83dc2dd'})
changed: [testbed-node-5] => (item={'data': 'osd-block-ad566f4e-67fb-565a-8346-037c8100dc24', 'data_vg': 'ceph-ad566f4e-67fb-565a-8346-037c8100dc24'})
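Each item above maps to one ceph-volume call against a pre-created LVM volume group and logical volume, bracketed by the noup flag so new OSDs are not marked up one by one mid-deploy. Roughly, using one of the testbed-node-3 devices from the log (bluestore assumed):

    $ ceph osd set noup
    $ ceph-volume lvm create --bluestore \
          --data ceph-f79a0596-c901-5dda-8c3d-7673c0794e9f/osd-block-f79a0596-c901-5dda-8c3d-7673c0794e9f
    $ ceph osd unset noup   # done later in the play, once all OSDs are prepared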
TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Monday 19 May 2025 14:41:54 +0000 (0:00:41.965) 0:07:36.382 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Monday 19 May 2025 14:41:54 +0000 (0:00:00.659) 0:07:37.042 ************
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Monday 19 May 2025 14:41:55 +0000 (0:00:00.572) 0:07:37.614 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Monday 19 May 2025 14:41:55 +0000 (0:00:00.637) 0:07:38.252 ************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Monday 19 May 2025 14:41:58 +0000 (0:00:02.650) 0:07:40.902 ************
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Generate systemd unit file] ***********************************
Monday 19 May 2025 14:41:59 +0000 (0:00:00.463) 0:07:41.365 ************
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Monday 19 May 2025 14:42:00 +0000 (0:00:01.173) 0:07:42.539 ************
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Monday 19 May 2025 14:42:01 +0000 (0:00:01.265) 0:07:43.804 ************
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Ensure systemd service override directory exists] *************
Monday 19 May 2025 14:42:03 +0000 (0:00:01.854) 0:07:45.659 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
Monday 19 May 2025 14:42:03 +0000 (0:00:00.276) 0:07:45.935 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
Monday 19 May 2025 14:42:03 +0000 (0:00:00.267) 0:07:46.203 ************
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=1)
ok: [testbed-node-3] => (item=4)
ok: [testbed-node-5] => (item=2)
ok: [testbed-node-4] => (item=3)
ok: [testbed-node-5] => (item=5)

TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
Monday 19 May 2025 14:42:05 +0000 (0:00:01.144) 0:07:47.347 ************
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=4)
changed: [testbed-node-4] => (item=3)
changed: [testbed-node-5] => (item=5)

TASK [ceph-osd : Systemd start osd] ********************************************
Monday 19 May 2025 14:42:07 +0000 (0:00:01.996) 0:07:49.344 ************
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=4)
changed: [testbed-node-4] => (item=3)
changed: [testbed-node-5] => (item=5)
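"Systemd start osd" starts one templated unit instance per OSD id collected earlier; by hand that is approximately the following, with the instance naming taken from the generated ceph-osd unit template (an assumption):

    $ systemctl start ceph-osd@0.service
    $ systemctl start ceph-osd@4.service   # ids 0 and 4 live on testbed-node-3 per the log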
TASK [ceph-osd : Unset noup flag] **********************************************
Monday 19 May 2025 14:42:10 +0000 (0:00:03.410) 0:07:52.754 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Wait for all osd to be up] ************************************
Monday 19 May 2025 14:42:13 +0000 (0:00:02.945) 0:07:55.700 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
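The wait task polls cluster state through the mon container until every prepared OSD reports up; the underlying check is along these lines (the exact JSON comparison in the role is an assumption):

    $ ceph osd stat -f json   # retried until num_up_osds equals num_osds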
TASK [ceph-osd : Include crush_rules.yml] **************************************
Monday 19 May 2025 14:42:26 +0000 (0:00:12.702) 0:08:08.402 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 19 May 2025 14:42:26 +0000 (0:00:00.818) 0:08:09.221 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Monday 19 May 2025 14:42:27 +0000 (0:00:00.563) 0:08:09.784 ************
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Monday 19 May 2025 14:42:27 +0000 (0:00:00.486) 0:08:10.270 ************
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Monday 19 May 2025 14:42:28 +0000 (0:00:00.367) 0:08:10.638 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Monday 19 May 2025 14:42:28 +0000 (0:00:00.278) 0:08:10.917 ************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Monday 19 May 2025 14:42:28 +0000 (0:00:00.200) 0:08:11.117 ************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Monday 19 May 2025 14:42:29 +0000 (0:00:00.519) 0:08:11.636 ************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Monday 19 May 2025 14:42:29 +0000 (0:00:00.209) 0:08:11.846 ************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Monday 19 May 2025 14:42:29 +0000 (0:00:00.199) 0:08:12.045 ************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Monday 19 May 2025 14:42:29 +0000 (0:00:00.117) 0:08:12.163 ************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Monday 19 May 2025 14:42:30 +0000 (0:00:00.208) 0:08:12.371 ************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Monday 19 May 2025 14:42:30 +0000 (0:00:00.215) 0:08:12.587 ************
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]
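These restart handlers were all skipped on this fresh deploy; when they do fire, the quiesce-and-restore sequence around an OSD restart corresponds to real ceph commands along these lines (the pool name is a placeholder):

    $ ceph balancer off
    $ ceph osd pool set <pool> pg_autoscale_mode off
      ... restart OSDs node by node ...
    $ ceph osd pool set <pool> pg_autoscale_mode on
    $ ceph balancer on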
14:45:07.808754 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.808760 | orchestrator | 2025-05-19 14:45:07.808765 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-05-19 14:45:07.808770 | orchestrator | Monday 19 May 2025 14:42:30 +0000 (0:00:00.295) 0:08:13.263 ************ 2025-05-19 14:45:07.808779 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.808785 | orchestrator | 2025-05-19 14:45:07.808790 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-05-19 14:45:07.808795 | orchestrator | Monday 19 May 2025 14:42:31 +0000 (0:00:00.737) 0:08:14.000 ************ 2025-05-19 14:45:07.808801 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.808806 | orchestrator | 2025-05-19 14:45:07.808811 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-19 14:45:07.808817 | orchestrator | 2025-05-19 14:45:07.808822 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-19 14:45:07.808827 | orchestrator | Monday 19 May 2025 14:42:32 +0000 (0:00:00.635) 0:08:14.636 ************ 2025-05-19 14:45:07.808833 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.808838 | orchestrator | 2025-05-19 14:45:07.808843 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-19 14:45:07.808849 | orchestrator | Monday 19 May 2025 14:42:33 +0000 (0:00:01.273) 0:08:15.909 ************ 2025-05-19 14:45:07.808854 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.808859 | orchestrator | 2025-05-19 14:45:07.808865 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-19 14:45:07.808870 | orchestrator | Monday 19 May 2025 14:42:34 +0000 (0:00:01.176) 0:08:17.085 ************ 2025-05-19 14:45:07.808875 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.808881 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.808886 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.808891 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.808896 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.808902 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.808907 | orchestrator | 2025-05-19 14:45:07.808912 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-19 14:45:07.808917 | orchestrator | Monday 19 May 2025 14:42:35 +0000 (0:00:00.786) 0:08:17.872 ************ 2025-05-19 14:45:07.808923 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.808928 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.808934 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.808939 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.808944 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.808950 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.808955 | orchestrator | 2025-05-19 14:45:07.808960 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-19 14:45:07.808965 | orchestrator | Monday 19 
May 2025 14:42:36 +0000 (0:00:00.948) 0:08:18.821 ************ 2025-05-19 14:45:07.808971 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.808976 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.808981 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.808986 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.808992 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.808997 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.809002 | orchestrator | 2025-05-19 14:45:07.809007 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-19 14:45:07.809013 | orchestrator | Monday 19 May 2025 14:42:37 +0000 (0:00:01.181) 0:08:20.002 ************ 2025-05-19 14:45:07.809018 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.809023 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.809029 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.809034 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.809039 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.809045 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.809050 | orchestrator | 2025-05-19 14:45:07.809055 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-19 14:45:07.809065 | orchestrator | Monday 19 May 2025 14:42:38 +0000 (0:00:00.948) 0:08:20.951 ************ 2025-05-19 14:45:07.809070 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.809076 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.809081 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.809086 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.809091 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.809097 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.809102 | orchestrator | 2025-05-19 14:45:07.809107 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-19 14:45:07.809113 | orchestrator | Monday 19 May 2025 14:42:39 +0000 (0:00:00.745) 0:08:21.696 ************ 2025-05-19 14:45:07.809118 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.809123 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.809131 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.809137 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.809142 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.809147 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.809152 | orchestrator | 2025-05-19 14:45:07.809158 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-19 14:45:07.809163 | orchestrator | Monday 19 May 2025 14:42:39 +0000 (0:00:00.546) 0:08:22.243 ************ 2025-05-19 14:45:07.809171 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.809176 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.809182 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.809187 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.809192 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.809197 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.809202 | orchestrator | 2025-05-19 14:45:07.809208 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-19 14:45:07.809213 | orchestrator | Monday 19 May 2025 14:42:40 +0000 (0:00:00.774) 
0:08:23.018 ************ 2025-05-19 14:45:07.809218 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.809224 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.809229 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.809234 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.809239 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.809245 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.809250 | orchestrator | 2025-05-19 14:45:07.809255 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-19 14:45:07.809260 | orchestrator | Monday 19 May 2025 14:42:41 +0000 (0:00:00.948) 0:08:23.967 ************ 2025-05-19 14:45:07.809266 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.809271 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.809276 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.809281 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.809287 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.809292 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.809297 | orchestrator | 2025-05-19 14:45:07.809302 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-19 14:45:07.809323 | orchestrator | Monday 19 May 2025 14:42:42 +0000 (0:00:01.192) 0:08:25.159 ************ 2025-05-19 14:45:07.809329 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.809334 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.809339 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.809345 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.809350 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.809355 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.809360 | orchestrator | 2025-05-19 14:45:07.809366 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-19 14:45:07.809371 | orchestrator | Monday 19 May 2025 14:42:43 +0000 (0:00:00.559) 0:08:25.719 ************ 2025-05-19 14:45:07.809376 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.809382 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.809391 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.809397 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.809402 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.809407 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.809412 | orchestrator | 2025-05-19 14:45:07.809418 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-19 14:45:07.809423 | orchestrator | Monday 19 May 2025 14:42:44 +0000 (0:00:00.748) 0:08:26.467 ************ 2025-05-19 14:45:07.809428 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.809433 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.809439 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.809444 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.809449 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.809455 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.809460 | orchestrator | 2025-05-19 14:45:07.809465 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-19 14:45:07.809471 | orchestrator | Monday 19 May 2025 14:42:44 +0000 (0:00:00.594) 0:08:27.062 ************ 2025-05-19 14:45:07.809476 | orchestrator | skipping: [testbed-node-0] 
2025-05-19 14:45:07.809481 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.809486 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.809492 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.809497 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.809502 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.809508 | orchestrator | 2025-05-19 14:45:07.809513 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-19 14:45:07.809518 | orchestrator | Monday 19 May 2025 14:42:45 +0000 (0:00:00.794) 0:08:27.856 ************ 2025-05-19 14:45:07.809524 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.809529 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.809534 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.809539 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.809545 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.809550 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.809555 | orchestrator | 2025-05-19 14:45:07.809561 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-19 14:45:07.809566 | orchestrator | Monday 19 May 2025 14:42:46 +0000 (0:00:00.622) 0:08:28.479 ************ 2025-05-19 14:45:07.809571 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.809576 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.809582 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.809587 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.809592 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.809598 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.809603 | orchestrator | 2025-05-19 14:45:07.809608 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-19 14:45:07.809614 | orchestrator | Monday 19 May 2025 14:42:46 +0000 (0:00:00.785) 0:08:29.264 ************ 2025-05-19 14:45:07.809619 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:45:07.809624 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:45:07.809629 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:45:07.809635 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.809640 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.809666 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.809672 | orchestrator | 2025-05-19 14:45:07.809677 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-19 14:45:07.809682 | orchestrator | Monday 19 May 2025 14:42:47 +0000 (0:00:00.665) 0:08:29.930 ************ 2025-05-19 14:45:07.809691 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.809697 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.809702 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.809707 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.809713 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.809718 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.809727 | orchestrator | 2025-05-19 14:45:07.809733 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-19 14:45:07.809741 | orchestrator | Monday 19 May 2025 14:42:48 +0000 (0:00:00.749) 0:08:30.680 ************ 2025-05-19 14:45:07.809747 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.809752 | orchestrator | ok: 
[testbed-node-1] 2025-05-19 14:45:07.809757 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.809762 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.809768 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.809773 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.809778 | orchestrator | 2025-05-19 14:45:07.809783 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-19 14:45:07.809789 | orchestrator | Monday 19 May 2025 14:42:48 +0000 (0:00:00.590) 0:08:31.271 ************ 2025-05-19 14:45:07.809794 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.809799 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.809804 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.809810 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.809815 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.809820 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.809825 | orchestrator | 2025-05-19 14:45:07.809831 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-05-19 14:45:07.809836 | orchestrator | Monday 19 May 2025 14:42:50 +0000 (0:00:01.138) 0:08:32.409 ************ 2025-05-19 14:45:07.809841 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:45:07.809847 | orchestrator | 2025-05-19 14:45:07.809852 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-05-19 14:45:07.809858 | orchestrator | Monday 19 May 2025 14:42:54 +0000 (0:00:03.943) 0:08:36.353 ************ 2025-05-19 14:45:07.809863 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.809868 | orchestrator | 2025-05-19 14:45:07.809874 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-05-19 14:45:07.809879 | orchestrator | Monday 19 May 2025 14:42:55 +0000 (0:00:01.929) 0:08:38.283 ************ 2025-05-19 14:45:07.809884 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.809890 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:45:07.809895 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:45:07.809900 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.809905 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.809911 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.809916 | orchestrator | 2025-05-19 14:45:07.809921 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-05-19 14:45:07.809927 | orchestrator | Monday 19 May 2025 14:42:57 +0000 (0:00:01.801) 0:08:40.084 ************ 2025-05-19 14:45:07.809932 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:45:07.809937 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:45:07.809943 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:45:07.809948 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.809953 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.809958 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.809964 | orchestrator | 2025-05-19 14:45:07.809969 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-05-19 14:45:07.809974 | orchestrator | Monday 19 May 2025 14:42:58 +0000 (0:00:00.906) 0:08:40.991 ************ 2025-05-19 14:45:07.809980 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 
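The ceph-crash block above shows ceph-ansible's usual key-distribution pattern: the client.crash keyring is created once on the first monitor (which is why only testbed-node-0 reports changed for the create), read back from it, and then pushed to every node before the containerized service is set up; testbed-node-0 reports ok on the copy because the key already exists where it was created. A minimal sketch of that pattern, assuming an inventory group named 'mons' and the 'profile crash' capabilities documented upstream (illustrative only, not the actual ceph-ansible task file):

- name: Create client.crash keyring once on the first monitor
  # 'profile crash' restricts the key to posting crash reports (upstream convention, an assumption here)
  ansible.builtin.command: >
    ceph auth get-or-create client.crash
    mon 'profile crash' mgr 'profile crash'
    -o /etc/ceph/ceph.client.crash.keyring
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true

- name: Get keys from monitors
  # slurp returns the file base64-encoded so it can be written out on the other nodes
  ansible.builtin.slurp:
    src: /etc/ceph/ceph.client.crash.keyring
  register: crash_keyring
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true

- name: Copy ceph key(s) if needed
  ansible.builtin.copy:
    content: "{{ crash_keyring.content | b64decode }}"
    dest: /etc/ceph/ceph.client.crash.keyring
    mode: "0600"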
2025-05-19 14:45:07.809986 | orchestrator | 2025-05-19 14:45:07.809991 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-05-19 14:45:07.809996 | orchestrator | Monday 19 May 2025 14:42:59 +0000 (0:00:01.224) 0:08:42.215 ************ 2025-05-19 14:45:07.810002 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:45:07.810007 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:45:07.810012 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:45:07.810036 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.810045 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.810051 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.810056 | orchestrator | 2025-05-19 14:45:07.810061 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-05-19 14:45:07.810066 | orchestrator | Monday 19 May 2025 14:43:01 +0000 (0:00:01.848) 0:08:44.064 ************ 2025-05-19 14:45:07.810072 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:45:07.810077 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:45:07.810082 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:45:07.810087 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.810093 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.810098 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.810103 | orchestrator | 2025-05-19 14:45:07.810109 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-05-19 14:45:07.810114 | orchestrator | Monday 19 May 2025 14:43:04 +0000 (0:00:02.949) 0:08:47.014 ************ 2025-05-19 14:45:07.810119 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.810125 | orchestrator | 2025-05-19 14:45:07.810130 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-05-19 14:45:07.810136 | orchestrator | Monday 19 May 2025 14:43:05 +0000 (0:00:01.040) 0:08:48.054 ************ 2025-05-19 14:45:07.810141 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.810146 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.810151 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.810157 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.810162 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.810167 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.810173 | orchestrator | 2025-05-19 14:45:07.810178 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-05-19 14:45:07.810183 | orchestrator | Monday 19 May 2025 14:43:06 +0000 (0:00:00.621) 0:08:48.676 ************ 2025-05-19 14:45:07.810189 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:45:07.810194 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:45:07.810202 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:45:07.810208 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.810213 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.810218 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.810223 | orchestrator | 2025-05-19 14:45:07.810229 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-05-19 14:45:07.810234 | orchestrator | Monday 19 May 2025 14:43:08 +0000 (0:00:02.129) 
0:08:50.805 ************ 2025-05-19 14:45:07.810243 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:45:07.810248 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:45:07.810254 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:45:07.810259 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.810264 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.810269 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.810275 | orchestrator | 2025-05-19 14:45:07.810280 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-19 14:45:07.810285 | orchestrator | 2025-05-19 14:45:07.810291 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-19 14:45:07.810296 | orchestrator | Monday 19 May 2025 14:43:09 +0000 (0:00:01.128) 0:08:51.934 ************ 2025-05-19 14:45:07.810302 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.810307 | orchestrator | 2025-05-19 14:45:07.810324 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-19 14:45:07.810330 | orchestrator | Monday 19 May 2025 14:43:10 +0000 (0:00:00.493) 0:08:52.427 ************ 2025-05-19 14:45:07.810335 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.810344 | orchestrator | 2025-05-19 14:45:07.810349 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-19 14:45:07.810354 | orchestrator | Monday 19 May 2025 14:43:10 +0000 (0:00:00.758) 0:08:53.186 ************ 2025-05-19 14:45:07.810360 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.810365 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.810370 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.810376 | orchestrator | 2025-05-19 14:45:07.810381 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-19 14:45:07.810386 | orchestrator | Monday 19 May 2025 14:43:11 +0000 (0:00:00.296) 0:08:53.482 ************ 2025-05-19 14:45:07.810391 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.810397 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.810402 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.810407 | orchestrator | 2025-05-19 14:45:07.810413 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-19 14:45:07.810418 | orchestrator | Monday 19 May 2025 14:43:11 +0000 (0:00:00.659) 0:08:54.142 ************ 2025-05-19 14:45:07.810423 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.810429 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.810434 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.810439 | orchestrator | 2025-05-19 14:45:07.810444 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-19 14:45:07.810450 | orchestrator | Monday 19 May 2025 14:43:12 +0000 (0:00:01.047) 0:08:55.190 ************ 2025-05-19 14:45:07.810455 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.810460 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.810465 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.810471 | orchestrator | 2025-05-19 14:45:07.810476 | orchestrator | TASK [ceph-handler : Check for a 
mgr container] ******************************** 2025-05-19 14:45:07.810482 | orchestrator | Monday 19 May 2025 14:43:13 +0000 (0:00:00.763) 0:08:55.954 ************ 2025-05-19 14:45:07.810487 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.810492 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.810497 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.810503 | orchestrator | 2025-05-19 14:45:07.810508 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-19 14:45:07.810513 | orchestrator | Monday 19 May 2025 14:43:14 +0000 (0:00:00.539) 0:08:56.494 ************ 2025-05-19 14:45:07.810519 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.810524 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.810529 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.810534 | orchestrator | 2025-05-19 14:45:07.810540 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-19 14:45:07.810545 | orchestrator | Monday 19 May 2025 14:43:14 +0000 (0:00:00.477) 0:08:56.971 ************ 2025-05-19 14:45:07.810550 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.810555 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.810561 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.810566 | orchestrator | 2025-05-19 14:45:07.810571 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-19 14:45:07.810577 | orchestrator | Monday 19 May 2025 14:43:15 +0000 (0:00:00.635) 0:08:57.607 ************ 2025-05-19 14:45:07.810582 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.810587 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.810592 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.810598 | orchestrator | 2025-05-19 14:45:07.810603 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-19 14:45:07.810608 | orchestrator | Monday 19 May 2025 14:43:16 +0000 (0:00:00.783) 0:08:58.391 ************ 2025-05-19 14:45:07.810614 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.810619 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.810624 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.810629 | orchestrator | 2025-05-19 14:45:07.810635 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-19 14:45:07.810647 | orchestrator | Monday 19 May 2025 14:43:16 +0000 (0:00:00.686) 0:08:59.077 ************ 2025-05-19 14:45:07.810653 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.810658 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.810663 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.810669 | orchestrator | 2025-05-19 14:45:07.810674 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-19 14:45:07.810679 | orchestrator | Monday 19 May 2025 14:43:16 +0000 (0:00:00.240) 0:08:59.317 ************ 2025-05-19 14:45:07.810685 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.810690 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.810698 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.810703 | orchestrator | 2025-05-19 14:45:07.810709 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-19 14:45:07.810714 | orchestrator | 
Monday 19 May 2025 14:43:17 +0000 (0:00:00.414) 0:08:59.732 ************ 2025-05-19 14:45:07.810719 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.810725 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.810730 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.810735 | orchestrator | 2025-05-19 14:45:07.810743 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-19 14:45:07.810749 | orchestrator | Monday 19 May 2025 14:43:17 +0000 (0:00:00.270) 0:09:00.002 ************ 2025-05-19 14:45:07.810754 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.810759 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.810764 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.810770 | orchestrator | 2025-05-19 14:45:07.810775 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-19 14:45:07.810780 | orchestrator | Monday 19 May 2025 14:43:17 +0000 (0:00:00.255) 0:09:00.258 ************ 2025-05-19 14:45:07.810786 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.810791 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.810796 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.810801 | orchestrator | 2025-05-19 14:45:07.810807 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-19 14:45:07.810812 | orchestrator | Monday 19 May 2025 14:43:18 +0000 (0:00:00.245) 0:09:00.503 ************ 2025-05-19 14:45:07.810817 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.810823 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.810828 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.810833 | orchestrator | 2025-05-19 14:45:07.810838 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-19 14:45:07.810844 | orchestrator | Monday 19 May 2025 14:43:18 +0000 (0:00:00.366) 0:09:00.869 ************ 2025-05-19 14:45:07.810849 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.810854 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.810859 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.810865 | orchestrator | 2025-05-19 14:45:07.810870 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-19 14:45:07.810875 | orchestrator | Monday 19 May 2025 14:43:18 +0000 (0:00:00.210) 0:09:01.080 ************ 2025-05-19 14:45:07.810881 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.810886 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.810891 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.810896 | orchestrator | 2025-05-19 14:45:07.810902 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-19 14:45:07.810907 | orchestrator | Monday 19 May 2025 14:43:19 +0000 (0:00:00.254) 0:09:01.335 ************ 2025-05-19 14:45:07.810912 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.810918 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.810923 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.810928 | orchestrator | 2025-05-19 14:45:07.810934 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-19 14:45:07.810939 | orchestrator | Monday 19 May 2025 14:43:19 +0000 (0:00:00.254) 0:09:01.589 ************ 2025-05-19 14:45:07.810949 | orchestrator | ok: 
[testbed-node-3]
2025-05-19 14:45:07.810954 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:45:07.810959 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:45:07.810965 | orchestrator |
2025-05-19 14:45:07.810970 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-05-19 14:45:07.810975 | orchestrator | Monday 19 May 2025 14:43:19 +0000 (0:00:00.678) 0:09:02.267 ************
2025-05-19 14:45:07.810980 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.810986 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.810991 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-05-19 14:45:07.810996 | orchestrator |
2025-05-19 14:45:07.811002 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-05-19 14:45:07.811007 | orchestrator | Monday 19 May 2025 14:43:20 +0000 (0:00:00.354) 0:09:02.621 ************
2025-05-19 14:45:07.811012 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-19 14:45:07.811018 | orchestrator |
2025-05-19 14:45:07.811023 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-05-19 14:45:07.811028 | orchestrator | Monday 19 May 2025 14:43:22 +0000 (0:00:01.977) 0:09:04.599 ************
2025-05-19 14:45:07.811034 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-05-19 14:45:07.811041 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.811046 | orchestrator |
2025-05-19 14:45:07.811052 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-05-19 14:45:07.811057 | orchestrator | Monday 19 May 2025 14:43:22 +0000 (0:00:00.160) 0:09:04.760 ************
2025-05-19 14:45:07.811063 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-19 14:45:07.811074 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-19 14:45:07.811079 | orchestrator |
2025-05-19 14:45:07.811085 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-05-19 14:45:07.811090 | orchestrator | Monday 19 May 2025 14:43:31 +0000 (0:00:09.137) 0:09:13.897 ************
2025-05-19 14:45:07.811100 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-19 14:45:07.811105 | orchestrator |
2025-05-19 14:45:07.811111 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-05-19 14:45:07.811116 | orchestrator | Monday 19 May 2025 14:43:35 +0000 (0:00:03.644) 0:09:17.542 ************
2025-05-19 14:45:07.811122 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:45:07.811127 | orchestrator |
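The "Create filesystem pools" and "Create ceph filesystem" tasks above are the core of CephFS provisioning: each pool dict (pg_num/pgp_num 16, size 3, replicated_rule) becomes a pool create on the delegated monitor (testbed-node-0), and the filesystem is then assembled from the metadata and data pools, which is where the 9.1 s and 3.6 s runtimes go. A hedged equivalent as plain command tasks, assuming the filesystem is named cephfs and a 'mons' group (the real role derives all of this from its pool variables):

- name: Create the CephFS pools  # pg_num 16, pgp_num 16, replicated_rule, per the dicts above
  ansible.builtin.command: "ceph osd pool create {{ item }} 16 16 replicated replicated_rule"
  loop:
    - cephfs_data
    - cephfs_metadata
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true

- name: Apply the replica count  # 'size': 3 in the dicts above
  ansible.builtin.command: "ceph osd pool set {{ item }} size 3"
  loop:
    - cephfs_data
    - cephfs_metadata
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true

- name: Create the filesystem  # 'ceph fs new' takes the metadata pool first
  ansible.builtin.command: "ceph fs new cephfs cephfs_metadata cephfs_data"
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true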
2025-05-19 14:45:07.811135 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-05-19 14:45:07.811141 | orchestrator | Monday 19 May 2025 14:43:35 +0000 (0:00:00.595) 0:09:18.137 ************
2025-05-19 14:45:07.811146 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-05-19 14:45:07.811152 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-05-19 14:45:07.811157 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-05-19 14:45:07.811162 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-05-19 14:45:07.811168 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-05-19 14:45:07.811173 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-05-19 14:45:07.811182 | orchestrator |
2025-05-19 14:45:07.811188 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-05-19 14:45:07.811193 | orchestrator | Monday 19 May 2025 14:43:36 +0000 (0:00:01.046) 0:09:19.183 ************
2025-05-19 14:45:07.811199 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:45:07.811204 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-19 14:45:07.811209 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-19 14:45:07.811214 | orchestrator |
2025-05-19 14:45:07.811220 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-05-19 14:45:07.811225 | orchestrator | Monday 19 May 2025 14:43:39 +0000 (0:00:02.447) 0:09:21.630 ************
2025-05-19 14:45:07.811230 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-19 14:45:07.811236 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-19 14:45:07.811241 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:45:07.811246 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-19 14:45:07.811252 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-19 14:45:07.811257 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:45:07.811262 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-19 14:45:07.811267 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-19 14:45:07.811273 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:45:07.811278 | orchestrator |
2025-05-19 14:45:07.811283 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-05-19 14:45:07.811289 | orchestrator | Monday 19 May 2025 14:43:40 +0000 (0:00:01.583) 0:09:23.214 ************
2025-05-19 14:45:07.811294 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:45:07.811299 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:45:07.811304 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:45:07.811343 | orchestrator |
2025-05-19 14:45:07.811350 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-05-19 14:45:07.811355 | orchestrator | Monday 19 May 2025 14:43:43 +0000 (0:00:02.553) 0:09:25.768 ************
2025-05-19 14:45:07.811360 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.811366 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.811371 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.811376 | orchestrator |
2025-05-19 14:45:07.811382 | orchestrator | TASK
[ceph-mds : Containerized.yml] ******************************************** 2025-05-19 14:45:07.811387 | orchestrator | Monday 19 May 2025 14:43:43 +0000 (0:00:00.326) 0:09:26.094 ************ 2025-05-19 14:45:07.811392 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.811398 | orchestrator | 2025-05-19 14:45:07.811403 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-05-19 14:45:07.811408 | orchestrator | Monday 19 May 2025 14:43:44 +0000 (0:00:00.721) 0:09:26.816 ************ 2025-05-19 14:45:07.811413 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.811419 | orchestrator | 2025-05-19 14:45:07.811424 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-05-19 14:45:07.811429 | orchestrator | Monday 19 May 2025 14:43:44 +0000 (0:00:00.513) 0:09:27.329 ************ 2025-05-19 14:45:07.811435 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.811480 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.811485 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.811491 | orchestrator | 2025-05-19 14:45:07.811496 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-05-19 14:45:07.811501 | orchestrator | Monday 19 May 2025 14:43:46 +0000 (0:00:01.242) 0:09:28.571 ************ 2025-05-19 14:45:07.811506 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.811512 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.811517 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.811527 | orchestrator | 2025-05-19 14:45:07.811532 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-05-19 14:45:07.811538 | orchestrator | Monday 19 May 2025 14:43:47 +0000 (0:00:01.426) 0:09:29.998 ************ 2025-05-19 14:45:07.811543 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.811548 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.811553 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.811559 | orchestrator | 2025-05-19 14:45:07.811564 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-05-19 14:45:07.811569 | orchestrator | Monday 19 May 2025 14:43:49 +0000 (0:00:01.775) 0:09:31.773 ************ 2025-05-19 14:45:07.811574 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.811580 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.811588 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.811594 | orchestrator | 2025-05-19 14:45:07.811599 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-05-19 14:45:07.811604 | orchestrator | Monday 19 May 2025 14:43:51 +0000 (0:00:01.949) 0:09:33.722 ************ 2025-05-19 14:45:07.811610 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.811615 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.811620 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.811625 | orchestrator | 2025-05-19 14:45:07.811635 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-19 14:45:07.811640 | orchestrator | Monday 19 May 2025 14:43:52 +0000 (0:00:01.446) 0:09:35.169 ************ 2025-05-19 
14:45:07.811646 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.811651 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.811657 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.811662 | orchestrator | 2025-05-19 14:45:07.811667 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-19 14:45:07.811673 | orchestrator | Monday 19 May 2025 14:43:53 +0000 (0:00:00.621) 0:09:35.790 ************ 2025-05-19 14:45:07.811678 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.811684 | orchestrator | 2025-05-19 14:45:07.811689 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-19 14:45:07.811694 | orchestrator | Monday 19 May 2025 14:43:54 +0000 (0:00:00.686) 0:09:36.477 ************ 2025-05-19 14:45:07.811700 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.811705 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.811710 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.811716 | orchestrator | 2025-05-19 14:45:07.811721 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-19 14:45:07.811726 | orchestrator | Monday 19 May 2025 14:43:54 +0000 (0:00:00.308) 0:09:36.786 ************ 2025-05-19 14:45:07.811732 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.811737 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.811742 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.811748 | orchestrator | 2025-05-19 14:45:07.811753 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-19 14:45:07.811758 | orchestrator | Monday 19 May 2025 14:43:55 +0000 (0:00:01.155) 0:09:37.941 ************ 2025-05-19 14:45:07.811764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 14:45:07.811769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 14:45:07.811774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 14:45:07.811780 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.811785 | orchestrator | 2025-05-19 14:45:07.811790 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-19 14:45:07.811796 | orchestrator | Monday 19 May 2025 14:43:56 +0000 (0:00:01.056) 0:09:38.997 ************ 2025-05-19 14:45:07.811801 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.811806 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.811811 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.811821 | orchestrator | 2025-05-19 14:45:07.811827 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-19 14:45:07.811832 | orchestrator | 2025-05-19 14:45:07.811836 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-19 14:45:07.811841 | orchestrator | Monday 19 May 2025 14:43:57 +0000 (0:00:00.765) 0:09:39.763 ************ 2025-05-19 14:45:07.811846 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.811851 | orchestrator | 2025-05-19 14:45:07.811855 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-19 
14:45:07.811860 | orchestrator | Monday 19 May 2025 14:43:57 +0000 (0:00:00.477) 0:09:40.240 ************ 2025-05-19 14:45:07.811865 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.811870 | orchestrator | 2025-05-19 14:45:07.811874 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-19 14:45:07.811879 | orchestrator | Monday 19 May 2025 14:43:58 +0000 (0:00:00.669) 0:09:40.909 ************ 2025-05-19 14:45:07.811884 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.811888 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.811893 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.811898 | orchestrator | 2025-05-19 14:45:07.811903 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-19 14:45:07.811907 | orchestrator | Monday 19 May 2025 14:43:58 +0000 (0:00:00.272) 0:09:41.182 ************ 2025-05-19 14:45:07.811912 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.811917 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.811922 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.811926 | orchestrator | 2025-05-19 14:45:07.811931 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-19 14:45:07.811936 | orchestrator | Monday 19 May 2025 14:43:59 +0000 (0:00:00.723) 0:09:41.905 ************ 2025-05-19 14:45:07.811941 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.811945 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.811950 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.811955 | orchestrator | 2025-05-19 14:45:07.811959 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-19 14:45:07.811964 | orchestrator | Monday 19 May 2025 14:44:00 +0000 (0:00:00.682) 0:09:42.587 ************ 2025-05-19 14:45:07.811969 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.811974 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.811978 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.811983 | orchestrator | 2025-05-19 14:45:07.811988 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-19 14:45:07.811992 | orchestrator | Monday 19 May 2025 14:44:01 +0000 (0:00:01.102) 0:09:43.689 ************ 2025-05-19 14:45:07.811997 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.812002 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.812007 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.812011 | orchestrator | 2025-05-19 14:45:07.812016 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-19 14:45:07.812024 | orchestrator | Monday 19 May 2025 14:44:01 +0000 (0:00:00.374) 0:09:44.064 ************ 2025-05-19 14:45:07.812028 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.812033 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.812038 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.812043 | orchestrator | 2025-05-19 14:45:07.812047 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-19 14:45:07.812055 | orchestrator | Monday 19 May 2025 14:44:02 +0000 (0:00:00.319) 0:09:44.384 ************ 2025-05-19 14:45:07.812060 | orchestrator | skipping: 
[testbed-node-3] 2025-05-19 14:45:07.812064 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.812069 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.812078 | orchestrator | 2025-05-19 14:45:07.812083 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-19 14:45:07.812087 | orchestrator | Monday 19 May 2025 14:44:02 +0000 (0:00:00.302) 0:09:44.686 ************ 2025-05-19 14:45:07.812092 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.812097 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.812102 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.812106 | orchestrator | 2025-05-19 14:45:07.812111 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-19 14:45:07.812116 | orchestrator | Monday 19 May 2025 14:44:03 +0000 (0:00:01.177) 0:09:45.864 ************ 2025-05-19 14:45:07.812121 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.812125 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.812130 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.812135 | orchestrator | 2025-05-19 14:45:07.812140 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-19 14:45:07.812144 | orchestrator | Monday 19 May 2025 14:44:04 +0000 (0:00:00.691) 0:09:46.556 ************ 2025-05-19 14:45:07.812149 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.812154 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.812158 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.812163 | orchestrator | 2025-05-19 14:45:07.812168 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-19 14:45:07.812173 | orchestrator | Monday 19 May 2025 14:44:04 +0000 (0:00:00.292) 0:09:46.849 ************ 2025-05-19 14:45:07.812177 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.812182 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.812187 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.812191 | orchestrator | 2025-05-19 14:45:07.812196 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-19 14:45:07.812201 | orchestrator | Monday 19 May 2025 14:44:04 +0000 (0:00:00.317) 0:09:47.166 ************ 2025-05-19 14:45:07.812205 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.812210 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.812215 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.812220 | orchestrator | 2025-05-19 14:45:07.812224 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-19 14:45:07.812229 | orchestrator | Monday 19 May 2025 14:44:05 +0000 (0:00:00.609) 0:09:47.776 ************ 2025-05-19 14:45:07.812234 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.812239 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.812243 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.812248 | orchestrator | 2025-05-19 14:45:07.812253 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-19 14:45:07.812258 | orchestrator | Monday 19 May 2025 14:44:05 +0000 (0:00:00.304) 0:09:48.081 ************ 2025-05-19 14:45:07.812262 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.812267 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.812272 | orchestrator | ok: 
[testbed-node-5] 2025-05-19 14:45:07.812276 | orchestrator | 2025-05-19 14:45:07.812281 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-19 14:45:07.812286 | orchestrator | Monday 19 May 2025 14:44:06 +0000 (0:00:00.290) 0:09:48.372 ************ 2025-05-19 14:45:07.812291 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.812295 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.812300 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.812305 | orchestrator | 2025-05-19 14:45:07.812321 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-19 14:45:07.812326 | orchestrator | Monday 19 May 2025 14:44:06 +0000 (0:00:00.265) 0:09:48.638 ************ 2025-05-19 14:45:07.812331 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.812336 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.812340 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.812345 | orchestrator | 2025-05-19 14:45:07.812350 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-19 14:45:07.812358 | orchestrator | Monday 19 May 2025 14:44:06 +0000 (0:00:00.395) 0:09:49.034 ************ 2025-05-19 14:45:07.812363 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.812368 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.812373 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.812377 | orchestrator | 2025-05-19 14:45:07.812382 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-19 14:45:07.812387 | orchestrator | Monday 19 May 2025 14:44:06 +0000 (0:00:00.245) 0:09:49.280 ************ 2025-05-19 14:45:07.812391 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.812396 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.812401 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.812406 | orchestrator | 2025-05-19 14:45:07.812410 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-19 14:45:07.812415 | orchestrator | Monday 19 May 2025 14:44:07 +0000 (0:00:00.281) 0:09:49.561 ************ 2025-05-19 14:45:07.812420 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.812425 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.812429 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.812434 | orchestrator | 2025-05-19 14:45:07.812439 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-05-19 14:45:07.812443 | orchestrator | Monday 19 May 2025 14:44:07 +0000 (0:00:00.587) 0:09:50.148 ************ 2025-05-19 14:45:07.812448 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.812453 | orchestrator | 2025-05-19 14:45:07.812458 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-19 14:45:07.812465 | orchestrator | Monday 19 May 2025 14:44:08 +0000 (0:00:00.442) 0:09:50.591 ************ 2025-05-19 14:45:07.812470 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 14:45:07.812475 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 14:45:07.812480 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 14:45:07.812484 | orchestrator | 2025-05-19 
14:45:07.812489 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-19 14:45:07.812496 | orchestrator | Monday 19 May 2025 14:44:10 +0000 (0:00:02.111) 0:09:52.703 ************ 2025-05-19 14:45:07.812501 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 14:45:07.812506 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 14:45:07.812511 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.812516 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 14:45:07.812521 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 14:45:07.812525 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.812530 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 14:45:07.812535 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 14:45:07.812539 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.812544 | orchestrator | 2025-05-19 14:45:07.812549 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-05-19 14:45:07.812554 | orchestrator | Monday 19 May 2025 14:44:11 +0000 (0:00:01.412) 0:09:54.116 ************ 2025-05-19 14:45:07.812559 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.812563 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.812568 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.812573 | orchestrator | 2025-05-19 14:45:07.812577 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-05-19 14:45:07.812582 | orchestrator | Monday 19 May 2025 14:44:12 +0000 (0:00:00.298) 0:09:54.414 ************ 2025-05-19 14:45:07.812587 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.812592 | orchestrator | 2025-05-19 14:45:07.812596 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-05-19 14:45:07.812601 | orchestrator | Monday 19 May 2025 14:44:12 +0000 (0:00:00.506) 0:09:54.921 ************ 2025-05-19 14:45:07.812609 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.812614 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.812619 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 14:45:07.812624 | orchestrator | 2025-05-19 14:45:07.812629 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-05-19 14:45:07.812634 | orchestrator | Monday 19 May 2025 14:44:13 +0000 (0:00:01.209) 0:09:56.131 ************ 2025-05-19 14:45:07.812638 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 14:45:07.812643 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-19 14:45:07.812648 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 14:45:07.812653 | orchestrator | changed: 
[testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-19 14:45:07.812657 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 14:45:07.812662 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-19 14:45:07.812667 | orchestrator | 2025-05-19 14:45:07.812672 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-19 14:45:07.812676 | orchestrator | Monday 19 May 2025 14:44:18 +0000 (0:00:04.329) 0:10:00.460 ************ 2025-05-19 14:45:07.812681 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 14:45:07.812686 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 14:45:07.812690 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 14:45:07.812695 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 14:45:07.812700 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 14:45:07.812705 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 14:45:07.812709 | orchestrator | 2025-05-19 14:45:07.812714 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-19 14:45:07.812719 | orchestrator | Monday 19 May 2025 14:44:20 +0000 (0:00:02.212) 0:10:02.673 ************ 2025-05-19 14:45:07.812723 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 14:45:07.812728 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:45:07.812733 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 14:45:07.812738 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:45:07.812742 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 14:45:07.812747 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:45:07.812752 | orchestrator | 2025-05-19 14:45:07.812756 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-05-19 14:45:07.812761 | orchestrator | Monday 19 May 2025 14:44:21 +0000 (0:00:01.176) 0:10:03.849 ************ 2025-05-19 14:45:07.812769 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-19 14:45:07.812774 | orchestrator | 2025-05-19 14:45:07.812779 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-05-19 14:45:07.812784 | orchestrator | Monday 19 May 2025 14:44:21 +0000 (0:00:00.218) 0:10:04.068 ************ 2025-05-19 14:45:07.812791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 14:45:07.812800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 14:45:07.812804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 14:45:07.812809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 14:45:07.812814 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-19 14:45:07.812819 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.812823 | orchestrator |
2025-05-19 14:45:07.812828 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-05-19 14:45:07.812833 | orchestrator | Monday 19 May 2025 14:44:22 +0000 (0:00:00.791) 0:10:04.859 ************
2025-05-19 14:45:07.812838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-19 14:45:07.812842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-19 14:45:07.812847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-19 14:45:07.812852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-19 14:45:07.812857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-19 14:45:07.812861 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.812866 | orchestrator |
2025-05-19 14:45:07.812871 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-05-19 14:45:07.812876 | orchestrator | Monday 19 May 2025 14:44:23 +0000 (0:00:00.484) 0:10:05.343 ************
2025-05-19 14:45:07.812881 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-19 14:45:07.812885 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-19 14:45:07.812890 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-19 14:45:07.812895 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-19 14:45:07.812900 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-19 14:45:07.812905 | orchestrator |
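Creating the five default.rgw.* pools is by far the most expensive step of this play (0:00:31.198, against sub-second tasks elsewhere): each item above becomes a pool create with pg_num/pgp_num 8 and size 3, and RGW pools are conventionally tagged with the rgw application so the cluster does not warn about untagged pools. A sketch under the same assumptions as the earlier examples ('mons' group; illustrative, not the role's actual code):

- name: Create the default.rgw.* pools  # pg_num 8, replicated, per the items above
  ansible.builtin.command: "ceph osd pool create {{ item }} 8 8 replicated"
  loop:
    - default.rgw.buckets.data
    - default.rgw.buckets.index
    - default.rgw.control
    - default.rgw.log
    - default.rgw.meta
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true

- name: Tag the pools for RGW  # avoids the 'application not enabled' health warning
  ansible.builtin.command: "ceph osd pool application enable {{ item }} rgw"
  loop:
    - default.rgw.buckets.data
    - default.rgw.buckets.index
    - default.rgw.control
    - default.rgw.log
    - default.rgw.meta
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true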
2025-05-19 14:45:07.812909 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-05-19 14:45:07.812914 | orchestrator | Monday 19 May 2025 14:44:54 +0000 (0:00:31.198) 0:10:36.541 ************
2025-05-19 14:45:07.812919 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.812924 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.812929 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.812933 | orchestrator |
2025-05-19 14:45:07.812938 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-05-19 14:45:07.812943 | orchestrator | Monday 19 May 2025 14:44:54 +0000 (0:00:00.294) 0:10:36.860 ************
2025-05-19 14:45:07.812947 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:45:07.812952 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:45:07.812957 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:45:07.812962 | orchestrator |
2025-05-19 14:45:07.812966 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-05-19 14:45:07.812974 | orchestrator | Monday 19 May 2025 14:44:54 +0000 (0:00:00.294) 0:10:37.154 ************
2025-05-19 14:45:07.812979 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:45:07.812984 | orchestrator |
2025-05-19 14:45:07.812989 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-05-19 14:45:07.812994 | orchestrator | Monday 19 May 2025 14:44:55 +0000 (0:00:00.723) 0:10:37.878 ************
2025-05-19 14:45:07.812998 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:45:07.813003 | orchestrator |
2025-05-19 14:45:07.813008 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-05-19 14:45:07.813015 | orchestrator | Monday 19 May 2025 14:44:56 +0000 (0:00:00.489) 0:10:38.368 ************
2025-05-19 14:45:07.813020 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:45:07.813025 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:45:07.813029 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:45:07.813034 | orchestrator |
2025-05-19 14:45:07.813039 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-05-19 14:45:07.813044 | orchestrator | Monday 19 May 2025 14:44:57 +0000 (0:00:01.244) 0:10:39.612 ************
2025-05-19 14:45:07.813051 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:45:07.813056 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:45:07.813060 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:45:07.813065 | orchestrator |
2025-05-19 14:45:07.813070 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-05-19 14:45:07.813074 | orchestrator | Monday 19 May 2025 14:44:58 +0000 (0:00:01.369) 0:10:40.982 ************
2025-05-19 14:45:07.813079 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:45:07.813084 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:45:07.813088 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:45:07.813093 | orchestrator |
2025-05-19 14:45:07.813098 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-05-19 14:45:07.813103 | orchestrator | Monday 19 May 2025 14:45:00 +0000 (0:00:01.755) 0:10:42.738 ************
2025-05-19 14:45:07.813107 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-19 14:45:07.813112 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-19 14:45:07.813117 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-19 14:45:07.813122 | orchestrator |
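As with ceph-crash and ceph-mds earlier in the run, the radosgw container is managed through a generated systemd unit plus a ceph-radosgw.target that groups the per-instance services; the instance data above (rgw0 on 192.168.16.13/.14/.15, port 8081) feeds the unit and the rgw frontends configuration. A rough illustration of what such a unit-generation task can look like; the unit content, container image, and naming below are assumptions, not the template ceph-ansible actually renders:

- name: Generate systemd unit file  # illustrative stand-in for the role's template
  ansible.builtin.copy:
    dest: /etc/systemd/system/ceph-radosgw@.service
    mode: "0644"
    content: |
      [Unit]
      Description=Ceph RGW %i
      After=network-online.target docker.service
      Requires=docker.service

      [Service]
      ExecStartPre=-/usr/bin/docker rm -f ceph-rgw-%i
      ExecStart=/usr/bin/docker run --rm --name ceph-rgw-%i --net=host \
        -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph:/var/lib/ceph:z \
        quay.io/ceph/ceph:latest radosgw -f -n client.rgw.%i
      ExecStop=/usr/bin/docker stop ceph-rgw-%i
      Restart=always

      [Install]
      WantedBy=ceph-radosgw.target

- name: Enable and start the instance  # rgw0 on this testbed
  ansible.builtin.systemd:
    name: ceph-radosgw@rgw0
    enabled: true
    state: started
    daemon_reload: true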
************ 2025-05-19 14:45:07.813136 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.813141 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.813145 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.813150 | orchestrator | 2025-05-19 14:45:07.813155 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-05-19 14:45:07.813160 | orchestrator | Monday 19 May 2025 14:45:03 +0000 (0:00:00.334) 0:10:45.634 ************ 2025-05-19 14:45:07.813164 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:45:07.813169 | orchestrator | 2025-05-19 14:45:07.813174 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-05-19 14:45:07.813179 | orchestrator | Monday 19 May 2025 14:45:03 +0000 (0:00:00.538) 0:10:46.173 ************ 2025-05-19 14:45:07.813184 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.813188 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.813193 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.813201 | orchestrator | 2025-05-19 14:45:07.813206 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-05-19 14:45:07.813210 | orchestrator | Monday 19 May 2025 14:45:04 +0000 (0:00:00.552) 0:10:46.725 ************ 2025-05-19 14:45:07.813215 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.813220 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:45:07.813225 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:45:07.813229 | orchestrator | 2025-05-19 14:45:07.813234 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-05-19 14:45:07.813239 | orchestrator | Monday 19 May 2025 14:45:04 +0000 (0:00:00.344) 0:10:47.070 ************ 2025-05-19 14:45:07.813243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 14:45:07.813248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 14:45:07.813253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 14:45:07.813258 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:45:07.813262 | orchestrator | 2025-05-19 14:45:07.813267 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-05-19 14:45:07.813272 | orchestrator | Monday 19 May 2025 14:45:05 +0000 (0:00:00.627) 0:10:47.697 ************ 2025-05-19 14:45:07.813276 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:45:07.813281 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:45:07.813286 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:45:07.813290 | orchestrator | 2025-05-19 14:45:07.813295 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:45:07.813300 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-05-19 14:45:07.813305 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-05-19 14:45:07.813319 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-05-19 14:45:07.813324 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-05-19 14:45:07.813329 | orchestrator | testbed-node-4 : 
ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-05-19 14:45:07.813338 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-05-19 14:45:07.813343 | orchestrator | 2025-05-19 14:45:07.813348 | orchestrator | 2025-05-19 14:45:07.813353 | orchestrator | 2025-05-19 14:45:07.813357 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:45:07.813362 | orchestrator | Monday 19 May 2025 14:45:05 +0000 (0:00:00.229) 0:10:47.927 ************ 2025-05-19 14:45:07.813367 | orchestrator | =============================================================================== 2025-05-19 14:45:07.813372 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 75.04s 2025-05-19 14:45:07.813379 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.97s 2025-05-19 14:45:07.813384 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.20s 2025-05-19 14:45:07.813389 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.18s 2025-05-19 14:45:07.813394 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.84s 2025-05-19 14:45:07.813398 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.40s 2025-05-19 14:45:07.813403 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.70s 2025-05-19 14:45:07.813408 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.08s 2025-05-19 14:45:07.813416 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.04s 2025-05-19 14:45:07.813421 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.14s 2025-05-19 14:45:07.813425 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.45s 2025-05-19 14:45:07.813430 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.28s 2025-05-19 14:45:07.813435 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.74s 2025-05-19 14:45:07.813439 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.33s 2025-05-19 14:45:07.813444 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 4.29s 2025-05-19 14:45:07.813449 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.94s 2025-05-19 14:45:07.813454 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.64s 2025-05-19 14:45:07.813458 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.41s 2025-05-19 14:45:07.813463 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.39s 2025-05-19 14:45:07.813468 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.24s
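The INFO records that follow come from the OSISM deployment driver: every wrapped playbook run (the Ceph play above, the kolla plays below) is tracked as a task with a UUID, and the runner polls the task state every few seconds until it leaves STARTED. Expressed as an Ansible task, the wait-until-terminal-state pattern looks roughly like this (the endpoint and field names are invented for illustration; the real watcher is part of OSISM's Python tooling):

```yaml
# Illustrative poll loop (hypothetical API endpoint and response shape).
- name: Wait for an OSISM task to reach a terminal state
  ansible.builtin.uri:
    url: "http://manager.example.local/api/tasks/{{ task_id }}"
    return_content: true
  register: task_state
  until: task_state.json.state in ['SUCCESS', 'FAILURE']
  retries: 3600
  delay: 1   # matches the "Wait 1 second(s) until the next check" cadence
```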
2025-05-19 14:45:07.813473 | orchestrator | 2025-05-19 14:45:07 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:45:07.813477 | orchestrator | 2025-05-19 14:45:07 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state STARTED 2025-05-19 14:45:07.813482 | orchestrator | 2025-05-19 14:45:07 | INFO  | Wait 1 second(s) until the next check
[... identical status polling repeated every ~3 seconds from 14:45:10 through 14:46:05: tasks dfff542f-e260-4a52-bdf3-ee6864abbe4e, b142a1ea-acb6-4e19-822a-e9c45680f266 and 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 are each reported in state STARTED, followed by "Wait 1 second(s) until the next check" ...]
2025-05-19 14:46:08.815126 | orchestrator | 2025-05-19 14:46:08 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:46:08.816993 | orchestrator | 2025-05-19 14:46:08 | INFO  | Task dfff542f-e260-4a52-bdf3-ee6864abbe4e is in state STARTED 2025-05-19 14:46:08.819618 | orchestrator | 2025-05-19 14:46:08 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state STARTED 2025-05-19 14:46:08.821800 | orchestrator | 2025-05-19 14:46:08 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED 2025-05-19 14:46:08.825410 | orchestrator | 2025-05-19 14:46:08 | INFO  | Task 6eab7744-8ddf-42e6-92e3-6cb2f6f046a4 is in state SUCCESS 2025-05-19 14:46:08.828270 | orchestrator | 2025-05-19
14:46:08.828313 | orchestrator | 2025-05-19 14:46:08.828326 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-19 14:46:08.828338 | orchestrator | 2025-05-19 14:46:08.828350 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-19 14:46:08.828361 | orchestrator | Monday 19 May 2025 14:43:03 +0000 (0:00:00.080) 0:00:00.080 ************ 2025-05-19 14:46:08.828373 | orchestrator | ok: [localhost] => { 2025-05-19 14:46:08.828386 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-19 14:46:08.828397 | orchestrator | } 2025-05-19 14:46:08.828409 | orchestrator | 2025-05-19 14:46:08.828420 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-19 14:46:08.828458 | orchestrator | Monday 19 May 2025 14:43:03 +0000 (0:00:00.034) 0:00:00.115 ************ 2025-05-19 14:46:08.828571 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-19 14:46:08.828584 | orchestrator | ...ignoring 2025-05-19 14:46:08.828595 | orchestrator | 2025-05-19 14:46:08.828800 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-19 14:46:08.828822 | orchestrator | Monday 19 May 2025 14:43:05 +0000 (0:00:02.717) 0:00:02.832 ************ 2025-05-19 14:46:08.828834 | orchestrator | skipping: [localhost] 2025-05-19 14:46:08.828845 | orchestrator | 2025-05-19 14:46:08.828856 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-19 14:46:08.828867 | orchestrator | Monday 19 May 2025 14:43:05 +0000 (0:00:00.039) 0:00:02.872 ************ 2025-05-19 14:46:08.828879 | orchestrator | ok: [localhost] 2025-05-19 14:46:08.828890 | orchestrator | 2025-05-19 14:46:08.828901 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 14:46:08.828912 | orchestrator | 2025-05-19 14:46:08.828923 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:46:08.828935 | orchestrator | Monday 19 May 2025 14:43:05 +0000 (0:00:00.136) 0:00:03.008 ************ 2025-05-19 14:46:08.828946 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:08.828957 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:46:08.828968 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:46:08.828979 | orchestrator | 2025-05-19 14:46:08.829005 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:46:08.829016 | orchestrator | Monday 19 May 2025 14:43:06 +0000 (0:00:00.269) 0:00:03.278 ************ 2025-05-19 14:46:08.829027 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-19 14:46:08.829039 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-19 14:46:08.829050 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-19 14:46:08.829061 | orchestrator | 2025-05-19 14:46:08.829072 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-19 14:46:08.829104 | orchestrator | 2025-05-19 14:46:08.829116 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-19 14:46:08.829127 | orchestrator | Monday 19 May 
2025 14:43:06 +0000 (0:00:00.596) 0:00:03.874 ************ 2025-05-19 14:46:08.829138 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 14:46:08.829150 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-19 14:46:08.829161 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-19 14:46:08.829172 | orchestrator | 2025-05-19 14:46:08.829183 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 14:46:08.829194 | orchestrator | Monday 19 May 2025 14:43:07 +0000 (0:00:00.402) 0:00:04.276 ************ 2025-05-19 14:46:08.829205 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:46:08.829217 | orchestrator | 2025-05-19 14:46:08.829228 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-19 14:46:08.829239 | orchestrator | Monday 19 May 2025 14:43:07 +0000 (0:00:00.441) 0:00:04.718 ************ 2025-05-19 14:46:08.829275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 14:46:08.829299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 14:46:08.829320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 14:46:08.829333 | orchestrator | 2025-05-19 14:46:08.829352 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-19 14:46:08.829364 | orchestrator | Monday 19 May 2025 14:43:10 +0000 (0:00:03.321) 0:00:08.040 ************ 2025-05-19 14:46:08.829375 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.829388 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.829398 | orchestrator | skipping: [testbed-node-2] 2025-05-19 
14:46:08.829409 | orchestrator | 2025-05-19 14:46:08.829422 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-19 14:46:08.829466 | orchestrator | Monday 19 May 2025 14:43:11 +0000 (0:00:00.614) 0:00:08.655 ************ 2025-05-19 14:46:08.829478 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.829491 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.829503 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.829519 | orchestrator | 2025-05-19 14:46:08.829532 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-19 14:46:08.829544 | orchestrator | Monday 19 May 2025 14:43:13 +0000 (0:00:01.573) 0:00:10.229 ************ 2025-05-19 14:46:08.829564 | orchestrator | changed: [testbed-node-0] => (item=mariadb service dict, MYSQL_HOST 192.168.16.10; the full dict, identical per node apart from MYSQL_HOST, is echoed once under "Ensuring config directories exist" above and is abbreviated from here on) 2025-05-19 14:46:08.829596 | orchestrator | changed: [testbed-node-2] => (item=mariadb service dict, MYSQL_HOST 192.168.16.12) 2025-05-19 14:46:08.829617 | orchestrator | changed: [testbed-node-1] => (item=mariadb service dict, MYSQL_HOST 192.168.16.11) 2025-05-19 14:46:08.829638 | orchestrator | 2025-05-19 14:46:08.829651 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-19 14:46:08.829664 | orchestrator | Monday 19 May 2025 14:43:18 +0000 (0:00:04.944) 0:00:15.173 ************ 2025-05-19 14:46:08.829676 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.829689 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.829701 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.829714 | orchestrator | 2025-05-19 14:46:08.829727 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-19 14:46:08.829739 | orchestrator | Monday 19 May 2025 14:43:19 +0000 (0:00:01.065) 0:00:16.238 ************
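The per-node results of this copy are printed below. The rendered galera.cnf is what turns the three MariaDB servers into a single Galera cluster; a minimal sketch of the kind of content involved, with addresses taken from the inventory in this log (the real kolla-ansible template is considerably larger):

```yaml
# Content sketch only, not the kolla-ansible galera.cnf template.
- name: Copying over galera.cnf
  ansible.builtin.copy:
    dest: /etc/kolla/mariadb/galera.cnf
    mode: "0660"
    content: |
      [mysqld]
      bind-address = {{ api_interface_address }}   # 192.168.16.10/.11/.12 per node
      port = 3306
      wsrep_on = ON
      wsrep_provider = /usr/lib/galera/libgalera_smm.so
      wsrep_cluster_address = gcomm://192.168.16.10:4567,192.168.16.11:4567,192.168.16.12:4567
      wsrep_sst_method = mariabackup               # pairs with the mariabackup config above
```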
2025-05-19 14:46:08.829751 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:46:08.829763 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.829776 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:46:08.829788 | orchestrator | 2025-05-19 14:46:08.829799 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 14:46:08.829809 | orchestrator | Monday 19 May 2025 14:43:22 +0000 (0:00:03.405) 0:00:19.644 ************ 2025-05-19 14:46:08.829820 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:46:08.829831 | orchestrator | 2025-05-19 14:46:08.829842 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-05-19 14:46:08.829852 | orchestrator | Monday 19 May 2025 14:43:23 +0000 (0:00:00.515) 0:00:20.159 ************ 2025-05-19 14:46:08.829873 | orchestrator | skipping: [testbed-node-2] => (item=mariadb service dict, MYSQL_HOST 192.168.16.12)  2025-05-19 14:46:08.829895 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.829907 | orchestrator | skipping: [testbed-node-0] => (item=mariadb service dict, MYSQL_HOST 192.168.16.10)  2025-05-19 14:46:08.829919 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:08.829970 | orchestrator | skipping: [testbed-node-1] => (item=mariadb service dict, MYSQL_HOST 192.168.16.11)  2025-05-19 14:46:08.829991 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.830002 | orchestrator | 2025-05-19 14:46:08.830013 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-05-19 14:46:08.830079 | orchestrator | Monday 19 May 2025 14:43:25 +0000 (0:00:02.140) 0:00:22.299 ************ 2025-05-19 14:46:08.830096 | orchestrator | skipping: [testbed-node-0] => (item=mariadb service dict, MYSQL_HOST 192.168.16.10)  2025-05-19 14:46:08.830108 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:08.830128 | orchestrator | skipping: [testbed-node-1] => (item=mariadb service dict, MYSQL_HOST 192.168.16.11)  2025-05-19 14:46:08.830149 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.830166 | orchestrator | skipping: [testbed-node-2] => (item=mariadb service dict, MYSQL_HOST 192.168.16.12)  2025-05-19 14:46:08.830178 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.830189 | orchestrator | 2025-05-19 14:46:08.830200 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-19 14:46:08.830210 | orchestrator | Monday 19 May 2025 14:43:27 +0000 (0:00:02.367) 0:00:24.667 ************ 2025-05-19 14:46:08.830222 | orchestrator | skipping: [testbed-node-0] => (item=mariadb service dict, MYSQL_HOST 192.168.16.10)  2025-05-19 14:46:08.830248 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:08.830274 | orchestrator | skipping: [testbed-node-1] => (item=mariadb service dict, MYSQL_HOST 192.168.16.11)  2025-05-19 14:46:08.830287 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.830298 | orchestrator | skipping: [testbed-node-2] => (item=mariadb service dict, MYSQL_HOST 192.168.16.12)  2025-05-19 14:46:08.830310 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.830321 | orchestrator | 2025-05-19 14:46:08.830332 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-19 14:46:08.830350 | orchestrator | Monday 19 May 2025 14:43:30 +0000 (0:00:02.778) 0:00:27.446 ************ 2025-05-19 14:46:08.830376 | orchestrator | changed: [testbed-node-2] => (item=mariadb service dict, MYSQL_HOST 192.168.16.12) 2025-05-19 14:46:08.830390 | orchestrator | changed: [testbed-node-1] => (item=mariadb service dict, MYSQL_HOST 192.168.16.11) 2025-05-19 14:46:08.830416 | orchestrator | changed: [testbed-node-0] => (item=mariadb service dict, MYSQL_HOST 192.168.16.10) 2025-05-19 14:46:08.830458 | orchestrator | 2025-05-19 14:46:08.830471 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-19 14:46:08.830482 | 
orchestrator | Monday 19 May 2025 14:43:33 +0000 (0:00:03.109) 0:00:30.555 ************ 2025-05-19 14:46:08.830493 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.830504 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:46:08.830514 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:46:08.830525 | orchestrator | 2025-05-19 14:46:08.830536 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-19 14:46:08.830547 | orchestrator | Monday 19 May 2025 14:43:34 +0000 (0:00:01.012) 0:00:31.568 ************ 2025-05-19 14:46:08.830558 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:08.830569 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:46:08.830580 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:46:08.830591 | orchestrator | 2025-05-19 14:46:08.830602 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-19 14:46:08.830613 | orchestrator | Monday 19 May 2025 14:43:34 +0000 (0:00:00.332) 0:00:31.900 ************ 2025-05-19 14:46:08.830623 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:08.830634 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:46:08.830645 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:46:08.830656 | orchestrator | 2025-05-19 14:46:08.830667 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-19 14:46:08.830678 | orchestrator | Monday 19 May 2025 14:43:35 +0000 (0:00:00.312) 0:00:32.212 ************ 2025-05-19 14:46:08.830690 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-19 14:46:08.830701 | orchestrator | ...ignoring 2025-05-19 14:46:08.830713 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-19 14:46:08.830724 | orchestrator | ...ignoring 2025-05-19 14:46:08.830735 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-19 14:46:08.830745 | orchestrator | ...ignoring 2025-05-19 14:46:08.830756 | orchestrator | 2025-05-19 14:46:08.830767 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-19 14:46:08.830778 | orchestrator | Monday 19 May 2025 14:43:46 +0000 (0:00:10.960) 0:00:43.172 ************ 2025-05-19 14:46:08.830789 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:08.830806 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:46:08.830817 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:46:08.830828 | orchestrator | 2025-05-19 14:46:08.830839 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-19 14:46:08.830850 | orchestrator | Monday 19 May 2025 14:43:46 +0000 (0:00:00.745) 0:00:43.918 ************ 2025-05-19 14:46:08.830861 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:08.830872 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.830882 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.830893 | orchestrator | 2025-05-19 14:46:08.830904 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-19 14:46:08.830915 | orchestrator | Monday 19 May 2025 14:43:47 +0000 (0:00:00.505) 0:00:44.423 ************ 2025-05-19 14:46:08.830926 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:08.830937 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.830947 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.830958 | orchestrator | 2025-05-19 14:46:08.830969 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-19 14:46:08.830980 | orchestrator | Monday 19 May 2025 14:43:47 +0000 (0:00:00.564) 0:00:44.987 ************ 2025-05-19 14:46:08.830990 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:08.831001 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.831012 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.831023 | orchestrator | 2025-05-19 14:46:08.831033 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-19 14:46:08.831044 | orchestrator | Monday 19 May 2025 14:43:48 +0000 (0:00:00.434) 0:00:45.422 ************ 2025-05-19 14:46:08.831055 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:08.831066 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:46:08.831076 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:46:08.831087 | orchestrator | 2025-05-19 14:46:08.831098 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-19 14:46:08.831109 | orchestrator | Monday 19 May 2025 14:43:48 +0000 (0:00:00.576) 0:00:45.999 ************ 2025-05-19 14:46:08.831150 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:08.831162 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.831173 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.831184 | orchestrator | 2025-05-19 14:46:08.831194 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 14:46:08.831205 | orchestrator | Monday 19 May 2025 14:43:49 +0000 (0:00:00.383) 0:00:46.382 ************ 2025-05-19 14:46:08.831216 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.831227 | orchestrator | skipping: 
[testbed-node-2] 2025-05-19 14:46:08.831238 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-19 14:46:08.831249 | orchestrator | 2025-05-19 14:46:08.831260 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-19 14:46:08.831271 | orchestrator | Monday 19 May 2025 14:43:49 +0000 (0:00:00.364) 0:00:46.747 ************ 2025-05-19 14:46:08.831281 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.831292 | orchestrator | 2025-05-19 14:46:08.831303 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-19 14:46:08.831314 | orchestrator | Monday 19 May 2025 14:43:59 +0000 (0:00:09.871) 0:00:56.618 ************ 2025-05-19 14:46:08.831325 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:08.831336 | orchestrator | 2025-05-19 14:46:08.831352 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 14:46:08.831363 | orchestrator | Monday 19 May 2025 14:43:59 +0000 (0:00:00.153) 0:00:56.771 ************ 2025-05-19 14:46:08.831374 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:08.831384 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.831395 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.831406 | orchestrator | 2025-05-19 14:46:08.831416 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-19 14:46:08.831488 | orchestrator | Monday 19 May 2025 14:44:00 +0000 (0:00:01.071) 0:00:57.843 ************ 2025-05-19 14:46:08.831501 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.831511 | orchestrator | 2025-05-19 14:46:08.831522 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-19 14:46:08.831533 | orchestrator | Monday 19 May 2025 14:44:08 +0000 (0:00:07.312) 0:01:05.155 ************ 2025-05-19 14:46:08.831544 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:08.831555 | orchestrator | 2025-05-19 14:46:08.831566 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-19 14:46:08.831576 | orchestrator | Monday 19 May 2025 14:44:09 +0000 (0:00:01.619) 0:01:06.775 ************ 2025-05-19 14:46:08.831587 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:08.831596 | orchestrator | 2025-05-19 14:46:08.831606 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-19 14:46:08.831615 | orchestrator | Monday 19 May 2025 14:44:12 +0000 (0:00:02.434) 0:01:09.210 ************ 2025-05-19 14:46:08.831625 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.831634 | orchestrator | 2025-05-19 14:46:08.831643 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-19 14:46:08.831653 | orchestrator | Monday 19 May 2025 14:44:12 +0000 (0:00:00.122) 0:01:09.333 ************ 2025-05-19 14:46:08.831662 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:08.831672 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.831681 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.831691 | orchestrator | 2025-05-19 14:46:08.831701 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-19 14:46:08.831710 | orchestrator | Monday 19 May 2025 14:44:12 +0000 (0:00:00.476) 0:01:09.809 ************ 
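The sequence above is the standard Galera bring-up: run a one-shot bootstrap container on the first node (testbed-node-0), start the first MariaDB container, and gate on port liveness and WSREP sync before the remaining nodes join. As a rough illustration only (not kolla-ansible's actual tasks), the two gates behave like the following Ansible sketch; the variable names and retry counts are assumptions, while the "MariaDB" search string and /usr/bin/clustercheck come straight from the log:

- name: Wait for MariaDB service port liveness (sketch)
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"  # assumption: the node's API address, e.g. 192.168.16.10
    port: 3306
    search_regex: "MariaDB"              # the same search string the ignored pre-checks timed out on
    timeout: 60

- name: Wait for MariaDB service to sync WSREP (sketch)
  ansible.builtin.command: docker exec mariadb /usr/bin/clustercheck
  register: wsrep_check
  changed_when: false
  retries: 10                            # assumption: poll until the node reports synced
  delay: 6
  until: wsrep_check.rc == 0
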
2025-05-19 14:46:08.831720 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:08.831729 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-19 14:46:08.831739 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:46:08.831749 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:46:08.831758 | orchestrator | 2025-05-19 14:46:08.831767 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-19 14:46:08.831777 | orchestrator | skipping: no hosts matched 2025-05-19 14:46:08.831786 | orchestrator | 2025-05-19 14:46:08.831796 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-19 14:46:08.831805 | orchestrator | 2025-05-19 14:46:08.831815 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-19 14:46:08.831824 | orchestrator | Monday 19 May 2025 14:44:13 +0000 (0:00:00.354) 0:01:10.164 ************ 2025-05-19 14:46:08.831834 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:46:08.831843 | orchestrator | 2025-05-19 14:46:08.831853 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-19 14:46:08.831862 | orchestrator | Monday 19 May 2025 14:44:32 +0000 (0:00:19.138) 0:01:29.302 ************ 2025-05-19 14:46:08.831872 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:46:08.831882 | orchestrator | 2025-05-19 14:46:08.831891 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-19 14:46:08.831901 | orchestrator | Monday 19 May 2025 14:44:52 +0000 (0:00:20.545) 0:01:49.848 ************ 2025-05-19 14:46:08.831910 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:46:08.831920 | orchestrator | 2025-05-19 14:46:08.831929 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-19 14:46:08.831939 | orchestrator | 2025-05-19 14:46:08.831948 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-19 14:46:08.831958 | orchestrator | Monday 19 May 2025 14:44:55 +0000 (0:00:02.372) 0:01:52.220 ************ 2025-05-19 14:46:08.831967 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:46:08.831977 | orchestrator | 2025-05-19 14:46:08.831986 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-19 14:46:08.831996 | orchestrator | Monday 19 May 2025 14:45:19 +0000 (0:00:24.071) 0:02:16.292 ************ 2025-05-19 14:46:08.832017 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:46:08.832027 | orchestrator | 2025-05-19 14:46:08.832036 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-19 14:46:08.832046 | orchestrator | Monday 19 May 2025 14:45:34 +0000 (0:00:15.531) 0:02:31.823 ************ 2025-05-19 14:46:08.832055 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:46:08.832065 | orchestrator | 2025-05-19 14:46:08.832074 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-19 14:46:08.832084 | orchestrator | 2025-05-19 14:46:08.832099 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-19 14:46:08.832109 | orchestrator | Monday 19 May 2025 14:45:37 +0000 (0:00:02.561) 0:02:34.384 ************ 2025-05-19 14:46:08.832119 | orchestrator | changed: [testbed-node-0] 
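The rolling restarts above bring testbed-node-1 and testbed-node-2 back one at a time, each gated on port liveness and WSREP sync, with the bootstrap host restarted last. Once the final wait passes, the cluster size can be confirmed by hand; a minimal sketch, assuming the mysql client inside the mariadb container and an illustrative database_password variable:

- name: Verify the Galera cluster reports all three members (sketch)
  ansible.builtin.command: >
    docker exec mariadb mysql -uroot -p{{ database_password }}
    -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"
  register: wsrep_size
  changed_when: false
  failed_when: "'3' not in wsrep_size.stdout"  # expect wsrep_cluster_size = 3
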
2025-05-19 14:46:08.832128 | orchestrator | 2025-05-19 14:46:08.832138 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-19 14:46:08.832147 | orchestrator | Monday 19 May 2025 14:45:47 +0000 (0:00:10.585) 0:02:44.970 ************ 2025-05-19 14:46:08.832157 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:08.832166 | orchestrator | 2025-05-19 14:46:08.832176 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-19 14:46:08.832185 | orchestrator | Monday 19 May 2025 14:45:52 +0000 (0:00:04.524) 0:02:49.494 ************ 2025-05-19 14:46:08.832195 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:08.832204 | orchestrator | 2025-05-19 14:46:08.832214 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-19 14:46:08.832224 | orchestrator | 2025-05-19 14:46:08.832233 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-19 14:46:08.832243 | orchestrator | Monday 19 May 2025 14:45:54 +0000 (0:00:02.371) 0:02:51.866 ************ 2025-05-19 14:46:08.832257 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:46:08.832267 | orchestrator | 2025-05-19 14:46:08.832277 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-19 14:46:08.832286 | orchestrator | Monday 19 May 2025 14:45:55 +0000 (0:00:00.546) 0:02:52.413 ************ 2025-05-19 14:46:08.832296 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.832306 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.832315 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.832325 | orchestrator | 2025-05-19 14:46:08.832334 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-19 14:46:08.832344 | orchestrator | Monday 19 May 2025 14:45:57 +0000 (0:00:02.325) 0:02:54.738 ************ 2025-05-19 14:46:08.832353 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.832363 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.832373 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.832382 | orchestrator | 2025-05-19 14:46:08.832392 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-19 14:46:08.832401 | orchestrator | Monday 19 May 2025 14:45:59 +0000 (0:00:01.962) 0:02:56.701 ************ 2025-05-19 14:46:08.832411 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.832420 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.832486 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.832496 | orchestrator | 2025-05-19 14:46:08.832506 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-19 14:46:08.832515 | orchestrator | Monday 19 May 2025 14:46:01 +0000 (0:00:02.101) 0:02:58.802 ************ 2025-05-19 14:46:08.832525 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.832534 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.832544 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:46:08.832553 | orchestrator | 2025-05-19 14:46:08.832563 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-19 14:46:08.832572 | orchestrator | Monday 19 May 2025 14:46:03 +0000 (0:00:02.084) 0:03:00.887 ************ 
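The post-configuration tasks above create the shard root, monitor, and backup accounts on the bootstrap host only (the other nodes skip them, since Galera replicates the grants). The monitor account is the one /usr/bin/clustercheck authenticates with in the container healthcheck shown earlier. A sketch of such a task, assuming the community.mysql collection and illustrative variable names rather than kolla-ansible's exact implementation:

- name: Creating mysql monitor user (sketch)
  community.mysql.mysql_user:
    name: monitor
    password: "{{ mariadb_monitor_password }}"  # assumption: matches MYSQL_PASSWORD in the container environment above
    host: "%"
    priv: "*.*:USAGE"                           # assumption: connect-only rights suffice for clustercheck
    login_host: "{{ api_interface_address }}"
    login_user: root
    login_password: "{{ database_password }}"
  run_once: true                                # mirroring the log: only the first node runs this
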
2025-05-19 14:46:08.832582 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:08.832598 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:46:08.832608 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:46:08.832618 | orchestrator | 2025-05-19 14:46:08.832627 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-19 14:46:08.832637 | orchestrator | Monday 19 May 2025 14:46:06 +0000 (0:00:02.791) 0:03:03.679 ************ 2025-05-19 14:46:08.832646 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:08.832655 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:08.832665 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:08.832675 | orchestrator | 2025-05-19 14:46:08.832684 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:46:08.832694 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-19 14:46:08.832704 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-05-19 14:46:08.832715 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-19 14:46:08.832724 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-19 14:46:08.832734 | orchestrator | 2025-05-19 14:46:08.832744 | orchestrator | 2025-05-19 14:46:08.832753 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:46:08.832763 | orchestrator | Monday 19 May 2025 14:46:06 +0000 (0:00:00.217) 0:03:03.897 ************ 2025-05-19 14:46:08.832772 | orchestrator | =============================================================================== 2025-05-19 14:46:08.832781 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 43.21s 2025-05-19 14:46:08.832791 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.08s 2025-05-19 14:46:08.832800 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.96s 2025-05-19 14:46:08.832810 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.59s 2025-05-19 14:46:08.832819 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.87s 2025-05-19 14:46:08.832829 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.31s 2025-05-19 14:46:08.832844 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.94s 2025-05-19 14:46:08.832854 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.93s 2025-05-19 14:46:08.832863 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.52s 2025-05-19 14:46:08.832873 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.41s 2025-05-19 14:46:08.832882 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.32s 2025-05-19 14:46:08.832892 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.11s 2025-05-19 14:46:08.832901 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.79s 2025-05-19 14:46:08.832911 | orchestrator | service-cert-copy : mariadb | Copying over 
backend internal TLS key ----- 2.78s 2025-05-19 14:46:08.832920 | orchestrator | Check MariaDB service --------------------------------------------------- 2.72s 2025-05-19 14:46:08.832930 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.43s 2025-05-19 14:46:08.832939 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.37s 2025-05-19 14:46:08.832954 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.37s 2025-05-19 14:46:08.832963 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.33s 2025-05-19 14:46:08.832973 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.14s 2025-05-19 14:46:08.832988 | orchestrator | 2025-05-19 14:46:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:46:11.875370 | orchestrator | 2025-05-19 14:46:11 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:46:11.876233 | orchestrator | 2025-05-19 14:46:11 | INFO  | Task dfff542f-e260-4a52-bdf3-ee6864abbe4e is in state STARTED 2025-05-19 14:46:11.881488 | orchestrator | 2025-05-19 14:46:11 | INFO  | Task b142a1ea-acb6-4e19-822a-e9c45680f266 is in state SUCCESS 2025-05-19 14:46:11.881535 | orchestrator | 2025-05-19 14:46:11 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED 2025-05-19 14:46:11.881547 | orchestrator | 2025-05-19 14:46:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:46:11.882844 | orchestrator | 2025-05-19 14:46:11.882932 | orchestrator | 2025-05-19 14:46:11.882945 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 14:46:11.882956 | orchestrator | 2025-05-19 14:46:11.882966 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:46:11.882976 | orchestrator | Monday 19 May 2025 14:43:03 +0000 (0:00:00.255) 0:00:00.255 ************ 2025-05-19 14:46:11.883115 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:46:11.883128 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:46:11.883138 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:46:11.883148 | orchestrator | 2025-05-19 14:46:11.883158 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:46:11.883167 | orchestrator | Monday 19 May 2025 14:43:03 +0000 (0:00:00.252) 0:00:00.507 ************ 2025-05-19 14:46:11.883177 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-19 14:46:11.883188 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-19 14:46:11.883197 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-19 14:46:11.883207 | orchestrator | 2025-05-19 14:46:11.883217 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-19 14:46:11.883226 | orchestrator | 2025-05-19 14:46:11.883236 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-19 14:46:11.883245 | orchestrator | Monday 19 May 2025 14:43:03 +0000 (0:00:00.309) 0:00:00.817 ************ 2025-05-19 14:46:11.883255 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:46:11.883265 | orchestrator | 2025-05-19 14:46:11.883275 | orchestrator | TASK [opensearch : Setting sysctl 
values] ************************************** 2025-05-19 14:46:11.883285 | orchestrator | Monday 19 May 2025 14:43:04 +0000 (0:00:00.349) 0:00:01.167 ************ 2025-05-19 14:46:11.883294 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 14:46:11.883304 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 14:46:11.883313 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 14:46:11.883323 | orchestrator | 2025-05-19 14:46:11.883332 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-19 14:46:11.883342 | orchestrator | Monday 19 May 2025 14:43:04 +0000 (0:00:00.565) 0:00:01.732 ************ 2025-05-19 14:46:11.883355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 14:46:11.883410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 14:46:11.883459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 14:46:11.883474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 14:46:11.883487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 14:46:11.883503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 14:46:11.883521 | orchestrator | 2025-05-19 14:46:11.883532 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-19 14:46:11.883541 | orchestrator | Monday 19 May 2025 14:43:05 +0000 (0:00:01.387) 0:00:03.119 ************ 2025-05-19 14:46:11.883551 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-19 14:46:11.883561 | orchestrator | 2025-05-19 14:46:11.883570 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-19 14:46:11.883580 | orchestrator | Monday 19 May 2025 14:43:06 +0000 (0:00:00.471) 0:00:03.591 ************ 2025-05-19 14:46:11.883601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 14:46:11.883612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 14:46:11.883629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 14:46:11.883657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 14:46:11.883693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 14:46:11.883713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 14:46:11.883731 | orchestrator | 2025-05-19 14:46:11.883747 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-19 14:46:11.883765 | orchestrator | Monday 19 May 2025 14:43:08 +0000 (0:00:02.527) 0:00:06.119 ************ 2025-05-19 14:46:11.883782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 14:46:11.883820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 14:46:11.883841 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:11.883853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 14:46:11.883873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 14:46:11.883886 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:11.883897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 
'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 14:46:11.883916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 14:46:11.883927 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:11.883938 | orchestrator | 2025-05-19 14:46:11.883949 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-19 14:46:11.883960 | orchestrator | Monday 19 May 2025 14:43:10 +0000 (0:00:01.644) 0:00:07.764 ************ 2025-05-19 14:46:11.883976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 14:46:11.883995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 14:46:11.884006 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:46:11.884018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 14:46:11.884040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 14:46:11.884051 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:46:11.884067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  
2025-05-19 14:46:11.884088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 14:46:11.884100 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:46:11.884110 | orchestrator | 2025-05-19 14:46:11.884121 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-19 14:46:11.884132 | orchestrator | Monday 19 May 2025 14:43:11 +0000 (0:00:00.894) 0:00:08.658 ************ 2025-05-19 14:46:11.884143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 14:46:11.884161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 14:46:11.884176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 14:46:11.884195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 14:46:11.884209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 14:46:11.884227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 14:46:11.884239 | orchestrator |
2025-05-19 14:46:11.884250 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2025-05-19 14:46:11.884260 | orchestrator | Monday 19 May 2025 14:43:14 +0000 (0:00:02.593) 0:00:11.252 ************
2025-05-19 14:46:11.884271 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:46:11.884282 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:46:11.884293 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:46:11.884304 | orchestrator |
2025-05-19 14:46:11.884315 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2025-05-19 14:46:11.884325 | orchestrator | Monday 19 May 2025 14:43:18 +0000 (0:00:04.054) 0:00:15.306 ************
2025-05-19 14:46:11.884336 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:46:11.884346 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:46:11.884357 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:46:11.884368 | orchestrator |
2025-05-19 14:46:11.884378 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2025-05-19 14:46:11.884389 | orchestrator | Monday 19 May 2025 14:43:19 +0000 (0:00:01.466) 0:00:16.773 ************
2025-05-19 14:46:11.884405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 14:46:11.884423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 14:46:11.884479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 14:46:11.884492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 14:46:11.884509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 14:46:11.884530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 14:46:11.884548 | orchestrator |
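The loop items above are the kolla-ansible service map for OpenSearch, rendered by Ansible as Python dicts. As a readability aid, the same structure expressed as YAML looks roughly like the sketch below; every value is taken from the log itself, and {{ api_interface_address }} stands in for the per-node 192.168.16.x address (the key names follow kolla-ansible conventions, not a verbatim copy of the role's vars file):

    opensearch_services:
      opensearch:
        container_name: opensearch
        group: opensearch
        enabled: true
        image: registry.osism.tech/kolla/opensearch:2024.2
        environment:
          OPENSEARCH_JAVA_OPTS: "-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true"
        volumes:
          - "/etc/kolla/opensearch/:/var/lib/kolla/config_files/"
          - "opensearch:/var/lib/opensearch/data"
          - "kolla_logs:/var/log/kolla/"
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_curl http://{{ api_interface_address }}:9200"]
          timeout: "30"
        haproxy:
          opensearch:
            enabled: true
            mode: http
            external: false
            port: "9200"
            frontend_http_extra:
              - option dontlog-normal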
2025-05-19 14:46:11.884559 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-19 14:46:11.884570 | orchestrator | Monday 19 May 2025 14:43:21 +0000 (0:00:01.836) 0:00:18.609 ************
2025-05-19 14:46:11.884581 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:46:11.884591 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:46:11.884602 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:46:11.884613 | orchestrator |
2025-05-19 14:46:11.884623 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-19 14:46:11.884634 | orchestrator | Monday 19 May 2025 14:43:21 +0000 (0:00:00.238) 0:00:18.847 ************
2025-05-19 14:46:11.884645 | orchestrator |
2025-05-19 14:46:11.884655 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-19 14:46:11.884666 | orchestrator | Monday 19 May 2025 14:43:21 +0000 (0:00:00.059) 0:00:18.907 ************
2025-05-19 14:46:11.884677 | orchestrator |
2025-05-19 14:46:11.884687 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-19 14:46:11.884698 | orchestrator | Monday 19 May 2025 14:43:21 +0000 (0:00:00.062) 0:00:18.969 ************
2025-05-19 14:46:11.884709 | orchestrator |
2025-05-19 14:46:11.884719 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-05-19 14:46:11.884730 | orchestrator | Monday 19 May 2025 14:43:21 +0000 (0:00:00.174) 0:00:19.144 ************
2025-05-19 14:46:11.884741 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:46:11.884751 | orchestrator |
2025-05-19 14:46:11.884762 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-05-19 14:46:11.884773 | orchestrator | Monday 19 May 2025 14:43:22 +0000 (0:00:00.181) 0:00:19.325 ************
2025-05-19 14:46:11.884783 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:46:11.884794 | orchestrator |
2025-05-19 14:46:11.884805 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-05-19 14:46:11.884815 | orchestrator | Monday 19 May 2025 14:43:22 +0000 (0:00:00.186) 0:00:19.511 ************
2025-05-19 14:46:11.884826 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:46:11.884836 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:46:11.884847 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:46:11.884857 | orchestrator |
2025-05-19 14:46:11.884868 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-05-19 14:46:11.884879 | orchestrator | Monday 19 May 2025 14:44:36 +0000 (0:01:14.352) 0:01:33.864 ************
2025-05-19 14:46:11.884889 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:46:11.884900 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:46:11.884911 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:46:11.884921 | orchestrator |
2025-05-19 14:46:11.884932 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-19 14:46:11.884943 | orchestrator | Monday 19 May 2025 14:45:59 +0000 (0:01:22.747) 0:02:56.611 ************
2025-05-19 14:46:11.884953 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:46:11.884964 | orchestrator |
2025-05-19 14:46:11.884975 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-05-19 14:46:11.884985 | orchestrator | Monday 19 May 2025 14:46:00 +0000 (0:00:00.633) 0:02:57.245 ************
2025-05-19 14:46:11.884996 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:46:11.885007 | orchestrator |
2025-05-19 14:46:11.885017 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-05-19 14:46:11.885028 | orchestrator | Monday 19 May 2025 14:46:02 +0000 (0:00:02.313) 0:02:59.559 ************
2025-05-19 14:46:11.885046 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:46:11.885057 | orchestrator |
2025-05-19 14:46:11.885068 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-05-19 14:46:11.885083 | orchestrator | Monday 19 May 2025 14:46:04 +0000 (0:00:02.039) 0:03:01.598 ************
2025-05-19 14:46:11.885094 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:46:11.885105 | orchestrator |
2025-05-19 14:46:11.885115 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-05-19 14:46:11.885126 | orchestrator | Monday 19 May 2025 14:46:07 +0000 (0:00:02.696) 0:03:04.295 ************
2025-05-19 14:46:11.885137 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:46:11.885147 | orchestrator |
2025-05-19 14:46:11.885158 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:46:11.885170 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 14:46:11.885182 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 14:46:11.885193 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 14:46:11.885204 | orchestrator |
2025-05-19 14:46:11.885214 | orchestrator |
2025-05-19 14:46:11.885225 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:46:11.885242 | orchestrator | Monday 19 May 2025 14:46:09 +0000 (0:00:02.399) 0:03:06.694 ************
2025-05-19 14:46:11.885253 | orchestrator | ===============================================================================
2025-05-19 14:46:11.885264 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 82.75s
2025-05-19 14:46:11.885274 | orchestrator | opensearch : Restart opensearch container ------------------------------ 74.35s
2025-05-19 14:46:11.885285 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.05s
2025-05-19 14:46:11.885296 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.70s
2025-05-19 14:46:11.885306 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.59s
2025-05-19 14:46:11.885317 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.53s
2025-05-19 14:46:11.885328 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.40s
2025-05-19 14:46:11.885338 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.31s
2025-05-19 14:46:11.885349 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.04s
2025-05-19 14:46:11.885360 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.84s
2025-05-19 14:46:11.885370 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.64s
2025-05-19 14:46:11.885381 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.47s
2025-05-19 14:46:11.885392 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.39s
2025-05-19 14:46:11.885402 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.89s
2025-05-19 14:46:11.885413 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.63s
2025-05-19 14:46:11.885424 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.57s
2025-05-19 14:46:11.885487 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s
2025-05-19 14:46:11.885501 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.35s
2025-05-19 14:46:11.885512 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s
2025-05-19 14:46:11.885523 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.30s
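The retention-policy tasks recapped above follow a check-then-create pattern against the OpenSearch Index State Management API. A minimal sketch of what the create step amounts to, assuming the ISM plugin endpoint; the policy id and body here are illustrative, since the log does not show the actual policy the role ships:

    - name: Create new log retention policy  # sketch; policy id and body illustrative
      ansible.builtin.uri:
        url: "http://192.168.16.10:9200/_plugins/_ism/policies/log-retention"
        method: PUT
        body_format: json
        body:
          policy:
            description: Delete log indices after a fixed age  # illustrative
            default_state: open
            states:
              - name: open
                actions: []
                transitions:
                  - state_name: delete
                    conditions:
                      min_index_age: 14d  # illustrative retention window
              - name: delete
                actions:
                  - delete: {}
      run_once: true

The "Apply retention policy to existing indices" step then attaches the policy to indices that predate it; newly created indices pick it up via the policy's index pattern.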
2025-05-19 14:46:14.918593 | orchestrator | 2025-05-19 14:46:14 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:46:14.919746 | orchestrator | 2025-05-19 14:46:14 | INFO  | Task dfff542f-e260-4a52-bdf3-ee6864abbe4e is in state STARTED
2025-05-19 14:46:14.922880 | orchestrator | 2025-05-19 14:46:14 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:46:14.922923 | orchestrator | 2025-05-19 14:46:14 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:15.915842 | orchestrator | 2025-05-19 14:47:15 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:15.916496 | orchestrator | 2025-05-19 14:47:15 | INFO  | Task dfff542f-e260-4a52-bdf3-ee6864abbe4e is in state STARTED
2025-05-19 14:47:15.917388 | orchestrator | 2025-05-19 14:47:15 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:15.917420 | orchestrator | 2025-05-19 14:47:15 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:18.983646 | orchestrator | 2025-05-19 14:47:18 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:18.987086 | orchestrator | 2025-05-19 14:47:18 | INFO  | Task dfff542f-e260-4a52-bdf3-ee6864abbe4e is in state SUCCESS
2025-05-19 14:47:18.989033 | orchestrator |
2025-05-19 14:47:18.989225 | orchestrator |
2025-05-19 14:47:18.989240 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-05-19 14:47:18.989276 | orchestrator |
2025-05-19 14:47:18.989634 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-05-19 14:47:18.989655 | orchestrator | Monday 19 May 2025 14:45:10 +0000 (0:00:00.672) 0:00:00.672 ************
2025-05-19 14:47:18.989666 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:47:18.989678 | orchestrator |
2025-05-19 14:47:18.989689 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-05-19 14:47:18.989863 | orchestrator | Monday 19 May 2025 14:45:11 +0000 (0:00:00.570) 0:00:01.243 ************
2025-05-19 14:47:18.990140 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.990155 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.990166 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.990177 | orchestrator |
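The atomic-host probe is a simple stat of the ostree marker file, with the result turned into a fact in the next task. A sketch of the pattern, consistent with ceph-ansible's facts role (the marker path is the commonly used one and is an assumption here):

    - name: Check if it is atomic host
      ansible.builtin.stat:
        path: /run/ostree-booted  # assumed marker path
      register: stat_ostree

    - name: Set_fact is_atomic
      ansible.builtin.set_fact:
        is_atomic: "{{ stat_ostree.stat.exists }}"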
2025-05-19 14:47:18.990190 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-05-19 14:47:18.990201 | orchestrator | Monday 19 May 2025 14:45:12 +0000 (0:00:00.621) 0:00:01.865 ************
2025-05-19 14:47:18.990212 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.990222 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.990233 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.990244 | orchestrator |
2025-05-19 14:47:18.990254 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-05-19 14:47:18.990265 | orchestrator | Monday 19 May 2025 14:45:12 +0000 (0:00:00.258) 0:00:02.123 ************
2025-05-19 14:47:18.990276 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.990287 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.990297 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.990308 | orchestrator |
2025-05-19 14:47:18.990319 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-05-19 14:47:18.990329 | orchestrator | Monday 19 May 2025 14:45:13 +0000 (0:00:00.709) 0:00:02.833 ************
2025-05-19 14:47:18.990340 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.990350 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.990361 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.990371 | orchestrator |
2025-05-19 14:47:18.990382 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-05-19 14:47:18.990393 | orchestrator | Monday 19 May 2025 14:45:13 +0000 (0:00:00.285) 0:00:03.118 ************
2025-05-19 14:47:18.990404 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.990414 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.990425 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.990435 | orchestrator |
2025-05-19 14:47:18.990446 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-05-19 14:47:18.990457 | orchestrator | Monday 19 May 2025 14:45:13 +0000 (0:00:00.274) 0:00:03.392 ************
2025-05-19 14:47:18.990467 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.990478 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.990489 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.990499 | orchestrator |
2025-05-19 14:47:18.990510 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-05-19 14:47:18.990521 | orchestrator | Monday 19 May 2025 14:45:14 +0000 (0:00:00.334) 0:00:03.727 ************
2025-05-19 14:47:18.990532 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.990653 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.990668 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.990679 | orchestrator |
2025-05-19 14:47:18.990690 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-05-19 14:47:18.990701 | orchestrator | Monday 19 May 2025 14:45:14 +0000 (0:00:00.461) 0:00:04.189 ************
2025-05-19 14:47:18.990712 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.990723 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.990733 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.990744 | orchestrator |
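Container-binary selection follows the same stat-then-set_fact shape: if a podman binary exists on the host it wins, otherwise docker is used. A sketch under those assumptions (the exact conditional in the role may differ):

    - name: Check if podman binary is present
      ansible.builtin.stat:
        path: /usr/bin/podman
      register: podman_binary

    - name: Set_fact container_binary
      ansible.builtin.set_fact:
        container_binary: "{{ 'podman' if podman_binary.stat.exists else 'docker' }}"

On this testbed the result is docker, as the `docker ps` invocations registered further down show.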
2025-05-19 14:47:18.990756 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-05-19 14:47:18.990783 | orchestrator | Monday 19 May 2025 14:45:14 +0000 (0:00:00.282) 0:00:04.472 ************
2025-05-19 14:47:18.990796 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-19 14:47:18.990807 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-19 14:47:18.990819 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-19 14:47:18.990831 | orchestrator |
2025-05-19 14:47:18.990843 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-05-19 14:47:18.990855 | orchestrator | Monday 19 May 2025 14:45:15 +0000 (0:00:00.600) 0:00:05.072 ************
2025-05-19 14:47:18.990867 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.990878 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.990891 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.990902 | orchestrator |
2025-05-19 14:47:18.990915 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-05-19 14:47:18.990927 | orchestrator | Monday 19 May 2025 14:45:15 +0000 (0:00:00.421) 0:00:05.494 ************
2025-05-19 14:47:18.990938 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-19 14:47:18.990949 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-19 14:47:18.990959 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-19 14:47:18.990969 | orchestrator |
2025-05-19 14:47:18.990979 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-05-19 14:47:18.990988 | orchestrator | Monday 19 May 2025 14:45:17 +0000 (0:00:02.042) 0:00:07.536 ************
2025-05-19 14:47:18.990998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-19 14:47:18.991007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-19 14:47:18.991028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-19 14:47:18.991038 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.991047 | orchestrator |
2025-05-19 14:47:18.991057 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-05-19 14:47:18.991110 | orchestrator | Monday 19 May 2025 14:45:18 +0000 (0:00:00.371) 0:00:07.907 ************
2025-05-19 14:47:18.991124 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.991136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.991146 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.991156 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.991166 | orchestrator |
2025-05-19 14:47:18.991175 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-05-19 14:47:18.991184 | orchestrator | Monday 19 May 2025 14:45:18 +0000 (0:00:00.705) 0:00:08.613 ************
2025-05-19 14:47:18.991195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.991207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.991224 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.991234 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.991243 | orchestrator |
2025-05-19 14:47:18.991253 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-05-19 14:47:18.991262 | orchestrator | Monday 19 May 2025 14:45:19 +0000 (0:00:00.152) 0:00:08.765 ************
2025-05-19 14:47:18.991274 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a22936fb699a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-19 14:45:16.445911', 'end': '2025-05-19 14:45:16.498115', 'delta': '0:00:00.052204', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a22936fb699a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.991292 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '64cc773a9c53', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-19 14:45:17.159373', 'end': '2025-05-19 14:45:17.209772', 'delta': '0:00:00.050399', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['64cc773a9c53'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.991331 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8a4e8a83abfb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-19 14:45:17.669234', 'end': '2025-05-19 14:45:17.710291', 'delta': '0:00:00.041057', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8a4e8a83abfb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.991343 | orchestrator |
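The running-mon probe registered in the ok items above boils down to a delegated `docker ps` filtered by container name, exactly as shown in the `cmd` fields. A sketch of the task shape, where the group name and register variable are illustrative:

    - name: Find a running mon container
      ansible.builtin.command: "{{ container_binary }} ps -q --filter name=ceph-mon-{{ item }}"
      register: ceph_mon_container_stat  # name illustrative
      delegate_to: "{{ item }}"
      changed_when: false
      failed_when: false
      loop: "{{ groups['mons'] }}"  # group name assumed

A non-empty stdout (the short container id, e.g. a22936fb699a) marks that host as a usable running mon, which the subsequent set_fact turns into running_mon.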
2025-05-19 14:47:18.991352 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-05-19 14:47:18.991362 | orchestrator | Monday 19 May 2025 14:45:19 +0000 (0:00:00.370) 0:00:09.136 ************
2025-05-19 14:47:18.991371 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.991381 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.991390 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.991399 | orchestrator |
2025-05-19 14:47:18.991409 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-05-19 14:47:18.991418 | orchestrator | Monday 19 May 2025 14:45:19 +0000 (0:00:00.419) 0:00:09.555 ************
2025-05-19 14:47:18.991434 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-05-19 14:47:18.991444 | orchestrator |
2025-05-19 14:47:18.991453 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-05-19 14:47:18.991462 | orchestrator | Monday 19 May 2025 14:45:21 +0000 (0:00:01.600) 0:00:11.156 ************
2025-05-19 14:47:18.991472 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.991481 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.991591 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.991676 | orchestrator |
2025-05-19 14:47:18.991688 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-05-19 14:47:18.991697 | orchestrator | Monday 19 May 2025 14:45:21 +0000 (0:00:00.262) 0:00:11.418 ************
2025-05-19 14:47:18.991707 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.991716 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.991726 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.991735 | orchestrator |
2025-05-19 14:47:18.991745 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-19 14:47:18.991754 | orchestrator | Monday 19 May 2025 14:45:22 +0000 (0:00:00.379) 0:00:11.798 ************
2025-05-19 14:47:18.991763 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.991773 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.991782 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.991792 | orchestrator |
2025-05-19 14:47:18.991802 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-05-19 14:47:18.991811 | orchestrator | Monday 19 May 2025 14:45:22 +0000 (0:00:00.421) 0:00:12.219 ************
2025-05-19 14:47:18.991821 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.991830 | orchestrator |
2025-05-19 14:47:18.991840 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-05-19 14:47:18.991849 | orchestrator | Monday 19 May 2025 14:45:22 +0000 (0:00:00.118) 0:00:12.338 ************
2025-05-19 14:47:18.991859 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.991868 | orchestrator |
2025-05-19 14:47:18.991878 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-19 14:47:18.991887 | orchestrator | Monday 19 May 2025 14:45:22 +0000 (0:00:00.204) 0:00:12.542 ************
2025-05-19 14:47:18.991897 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.991906 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.991915 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.991925 | orchestrator |
2025-05-19 14:47:18.991934 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-05-19 14:47:18.991944 | orchestrator | Monday 19 May 2025 14:45:23 +0000 (0:00:00.245) 0:00:12.788 ************
2025-05-19 14:47:18.991953 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.991962 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.991972 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.991981 | orchestrator |
2025-05-19 14:47:18.991991 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-05-19 14:47:18.992000 | orchestrator | Monday 19 May 2025 14:45:23 +0000 (0:00:00.279) 0:00:13.067 ************
2025-05-19 14:47:18.992010 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.992019 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.992029 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.992038 | orchestrator |
2025-05-19 14:47:18.992048 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-05-19 14:47:18.992057 | orchestrator | Monday 19 May 2025 14:45:23 +0000 (0:00:00.434) 0:00:13.502 ************
2025-05-19 14:47:18.992066 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.992076 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.992085 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.992094 | orchestrator |
2025-05-19 14:47:18.992104 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-05-19 14:47:18.992114 | orchestrator | Monday 19 May 2025 14:45:24 +0000 (0:00:00.287) 0:00:13.790 ************
2025-05-19 14:47:18.992132 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.992141 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.992151 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.992160 | orchestrator |
2025-05-19 14:47:18.992170 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-05-19 14:47:18.992179 | orchestrator | Monday 19 May 2025 14:45:24 +0000 (0:00:00.287) 0:00:14.078 ************
2025-05-19 14:47:18.992194 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.992204 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.992214 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.992223 | orchestrator |
2025-05-19 14:47:18.992232 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-05-19 14:47:18.992270 | orchestrator | Monday 19 May 2025 14:45:24 +0000 (0:00:00.301) 0:00:14.379 ************
2025-05-19 14:47:18.992282 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.992291 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.992300 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.992310 | orchestrator |
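The device-resolution tasks just above (all skipped on this run) normalize /dev/disk/by-* symlinks to real block devices before OSD handling. A sketch of the pattern under that assumption, with the variable names illustrative:

    - name: Resolve device link(s)
      ansible.builtin.command: readlink -f {{ item }}
      register: resolved_devices  # name illustrative
      changed_when: false
      loop: "{{ devices }}"

    - name: Set_fact build devices from resolved symlinks
      ansible.builtin.set_fact:
        devices: "{{ resolved_devices.results | map(attribute='stdout') | list }}"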
2025-05-19 14:47:18.992321 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-05-19 14:47:18.992333 | orchestrator | Monday 19 May 2025 14:45:25 +0000 (0:00:00.513) 0:00:14.892 ************
2025-05-19 14:47:18.992352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f79a0596--c901--5dda--8c3d--7673c0794e9f-osd--block--f79a0596--c901--5dda--8c3d--7673c0794e9f', 'dm-uuid-LVM-6XjVVGnIu5dfK03NqnV2FLRoxstuMusnG99v2bfLI3funxirDTVcA7D0I8z0Kks5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--be132d09--93e5--58e2--99ec--48d3b83dc2dd-osd--block--be132d09--93e5--58e2--99ec--48d3b83dc2dd', 'dm-uuid-LVM-s9yX6STbOcEYw0jykggC8wY1mdrtBgcLNGy1nnupdvMuFCX9Ez12c63i8zTG99hb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part1', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part14', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part15', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part16', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-19 14:47:18.992652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f79a0596--c901--5dda--8c3d--7673c0794e9f-osd--block--f79a0596--c901--5dda--8c3d--7673c0794e9f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yYQ1Ui-9zvQ-fjxX-66QV-fkvC-JTKz-e8FWrp', 'scsi-0QEMU_QEMU_HARDDISK_680c5e0d-c4c7-4132-acba-9735c42c1af0', 'scsi-SQEMU_QEMU_HARDDISK_680c5e0d-c4c7-4132-acba-9735c42c1af0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-19 14:47:18.992729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--14b77220--8a02--5c14--b369--aaa75d02e7a5-osd--block--14b77220--8a02--5c14--b369--aaa75d02e7a5', 'dm-uuid-LVM-SogVLv5AA1iwBc4y1xxdo7yUfHOfzqDLCfsjyHqaQVU5sFt0qrdjbqGcyvu8YH29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--be132d09--93e5--58e2--99ec--48d3b83dc2dd-osd--block--be132d09--93e5--58e2--99ec--48d3b83dc2dd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lscncc-A5cD-eljx-6h5C-Xk73-kXPo-y2jZjU', 'scsi-0QEMU_QEMU_HARDDISK_b41d3d7b-c0e8-42b0-b403-509e3ccc1be2', 'scsi-SQEMU_QEMU_HARDDISK_b41d3d7b-c0e8-42b0-b403-509e3ccc1be2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-19 14:47:18.992769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d28da045--49d6--58b1--95f0--26301c413660-osd--block--d28da045--49d6--58b1--95f0--26301c413660', 'dm-uuid-LVM-r50SJW42xBIsxitZY0Vrid8wHWzvkHrTKt3Pg3cc1gIBl4KoEAalds8FVg26GTq4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9a454d9-5190-46d7-bf1d-412c3cdef809', 'scsi-SQEMU_QEMU_HARDDISK_b9a454d9-5190-46d7-bf1d-412c3cdef809'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-19 14:47:18.992804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-19 14:47:18.992842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992948 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.992966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.992978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.993001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-19 14:47:18.993021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--14b77220--8a02--5c14--b369--aaa75d02e7a5-osd--block--14b77220--8a02--5c14--b369--aaa75d02e7a5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UAvDnF-xl55-Dn60-gmP5-X2Ty-dkRp-hCEb4M', 'scsi-0QEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538', 'scsi-SQEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-19 14:47:18.993036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d28da045--49d6--58b1--95f0--26301c413660-osd--block--d28da045--49d6--58b1--95f0--26301c413660'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QeHnBy-RQtO-xZd0-LcD5-L29s-TGP5-g3wY4z', 'scsi-0QEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964', 'scsi-SQEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-19 14:47:18.993053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--18cd8a80--96d5--5946--80eb--7d63885b2b76-osd--block--18cd8a80--96d5--5946--80eb--7d63885b2b76', 'dm-uuid-LVM-6xlILYCsDgmXJUwznnA8gdmMneRu8jjdxjdLRJCHvX8zKbKkjGruy749r1Ul6j8k'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.993080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a', 'scsi-SQEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-19 14:47:18.993098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad566f4e--67fb--565a--8346--037c8100dc24-osd--block--ad566f4e--67fb--565a--8346--037c8100dc24', 'dm-uuid-LVM-kyHMoxOUeHOOnPVhxZlIuw1obDjedo4W3Zd21TPzF1Lso8MAilmhfuIhJvlF2J2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.993132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-19 14:47:18.993151 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.993169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.993184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.993194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-19 14:47:18.993204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512',
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 14:47:18.993214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 14:47:18.993231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 14:47:18.993240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 14:47:18.993250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 14:47:18.993273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:47:18.993285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--18cd8a80--96d5--5946--80eb--7d63885b2b76-osd--block--18cd8a80--96d5--5946--80eb--7d63885b2b76'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K51oYj-rXRT-7pk7-S3cd-z0JP-s0Xf-jUtv0X', 'scsi-0QEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834', 'scsi-SQEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:47:18.993302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ad566f4e--67fb--565a--8346--037c8100dc24-osd--block--ad566f4e--67fb--565a--8346--037c8100dc24'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rB9Rm5-jHsC-jbcH-OYEr-kT22-vWtN-cRSTcD', 'scsi-0QEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738', 'scsi-SQEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:47:18.993312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb', 'scsi-SQEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:47:18.993332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 14:47:18.993342 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:47:18.993352 | orchestrator | 2025-05-19 14:47:18.993361 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-05-19 14:47:18.993371 | orchestrator | Monday 19 May 2025 14:45:25 +0000 (0:00:00.514) 0:00:15.406 ************ 2025-05-19 14:47:18.993382 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f79a0596--c901--5dda--8c3d--7673c0794e9f-osd--block--f79a0596--c901--5dda--8c3d--7673c0794e9f', 'dm-uuid-LVM-6XjVVGnIu5dfK03NqnV2FLRoxstuMusnG99v2bfLI3funxirDTVcA7D0I8z0Kks5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993398 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--be132d09--93e5--58e2--99ec--48d3b83dc2dd-osd--block--be132d09--93e5--58e2--99ec--48d3b83dc2dd', 'dm-uuid-LVM-s9yX6STbOcEYw0jykggC8wY1mdrtBgcLNGy1nnupdvMuFCX9Ez12c63i8zTG99hb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993442 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993456 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993479 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993494 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993512 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993550 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--14b77220--8a02--5c14--b369--aaa75d02e7a5-osd--block--14b77220--8a02--5c14--b369--aaa75d02e7a5', 'dm-uuid-LVM-SogVLv5AA1iwBc4y1xxdo7yUfHOfzqDLCfsjyHqaQVU5sFt0qrdjbqGcyvu8YH29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993589 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993622 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d28da045--49d6--58b1--95f0--26301c413660-osd--block--d28da045--49d6--58b1--95f0--26301c413660', 'dm-uuid-LVM-r50SJW42xBIsxitZY0Vrid8wHWzvkHrTKt3Pg3cc1gIBl4KoEAalds8FVg26GTq4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993643 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part1', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part14', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part15', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part16', 'scsi-SQEMU_QEMU_HARDDISK_78133c64-849c-40c3-990a-e64897cf2484-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993672 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993697 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f79a0596--c901--5dda--8c3d--7673c0794e9f-osd--block--f79a0596--c901--5dda--8c3d--7673c0794e9f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yYQ1Ui-9zvQ-fjxX-66QV-fkvC-JTKz-e8FWrp', 'scsi-0QEMU_QEMU_HARDDISK_680c5e0d-c4c7-4132-acba-9735c42c1af0', 'scsi-SQEMU_QEMU_HARDDISK_680c5e0d-c4c7-4132-acba-9735c42c1af0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--be132d09--93e5--58e2--99ec--48d3b83dc2dd-osd--block--be132d09--93e5--58e2--99ec--48d3b83dc2dd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lscncc-A5cD-eljx-6h5C-Xk73-kXPo-y2jZjU', 'scsi-0QEMU_QEMU_HARDDISK_b41d3d7b-c0e8-42b0-b403-509e3ccc1be2', 'scsi-SQEMU_QEMU_HARDDISK_b41d3d7b-c0e8-42b0-b403-509e3ccc1be2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993728 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9a454d9-5190-46d7-bf1d-412c3cdef809', 'scsi-SQEMU_QEMU_HARDDISK_b9a454d9-5190-46d7-bf1d-412c3cdef809'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993759 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993769 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993779 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:47:18.993798 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993809 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993825 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993835 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993845 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--18cd8a80--96d5--5946--80eb--7d63885b2b76-osd--block--18cd8a80--96d5--5946--80eb--7d63885b2b76', 'dm-uuid-LVM-6xlILYCsDgmXJUwznnA8gdmMneRu8jjdxjdLRJCHvX8zKbKkjGruy749r1Ul6j8k'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993878 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a851c84-3902-4186-83ed-138a79cd637e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad566f4e--67fb--565a--8346--037c8100dc24-osd--block--ad566f4e--67fb--565a--8346--037c8100dc24', 'dm-uuid-LVM-kyHMoxOUeHOOnPVhxZlIuw1obDjedo4W3Zd21TPzF1Lso8MAilmhfuIhJvlF2J2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--14b77220--8a02--5c14--b369--aaa75d02e7a5-osd--block--14b77220--8a02--5c14--b369--aaa75d02e7a5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UAvDnF-xl55-Dn60-gmP5-X2Ty-dkRp-hCEb4M', 'scsi-0QEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538', 'scsi-SQEMU_QEMU_HARDDISK_5a001b31-cf12-4664-aa8d-ed0bc0514538'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d28da045--49d6--58b1--95f0--26301c413660-osd--block--d28da045--49d6--58b1--95f0--26301c413660'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QeHnBy-RQtO-xZd0-LcD5-L29s-TGP5-g3wY4z', 'scsi-0QEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964', 'scsi-SQEMU_QEMU_HARDDISK_d1a8e6bf-71cd-4139-b86c-b09c993f7964'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993956 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a', 'scsi-SQEMU_QEMU_HARDDISK_2a99b222-9040-43f8-85f0-4cedeb957b6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993966 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993976 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993986 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.993996 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:47:18.994051 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.994065 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.994083 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.994093 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.994103 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.994128 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_8da0273e-10d5-4ffc-9c46-b04f159e35a4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.994145 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--18cd8a80--96d5--5946--80eb--7d63885b2b76-osd--block--18cd8a80--96d5--5946--80eb--7d63885b2b76'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K51oYj-rXRT-7pk7-S3cd-z0JP-s0Xf-jUtv0X', 'scsi-0QEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834', 'scsi-SQEMU_QEMU_HARDDISK_a9ef89b3-6e14-4065-9e7c-f9800ecdb834'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 14:47:18.994156 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ad566f4e--67fb--565a--8346--037c8100dc24-osd--block--ad566f4e--67fb--565a--8346--037c8100dc24'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rB9Rm5-jHsC-jbcH-OYEr-kT22-vWtN-cRSTcD', 'scsi-0QEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738', 'scsi-SQEMU_QEMU_HARDDISK_ce4b2895-5caf-48e7-8bbd-df151d11c738'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.994166 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb', 'scsi-SQEMU_QEMU_HARDDISK_b351c90f-8a81-4fe1-9713-dc72db3449cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.994188 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-13-49-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-19 14:47:18.994204 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.994214 | orchestrator |
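The per-device `skipping` items above come from the ceph-facts role iterating over every entry in `ansible_facts.devices` (loop devices, dm volumes, disks, the config-drive sr0); each item is skipped because `osd_auto_discovery | default(False) | bool` evaluates to false, i.e. the testbed pins its OSD devices explicitly rather than auto-discovering them. A minimal sketch of the two configuration styles, with illustrative values that are assumptions rather than this testbed's actual group_vars:

```yaml
# Sketch of ceph-ansible OSD device selection; values are illustrative assumptions.

# Style in effect here: an explicit device list, so the auto-discovery
# set_fact is skipped for every block device found on the host.
osd_auto_discovery: false
devices:
  - /dev/sdb
  - /dev/sdc

# Alternative: let ceph-facts build `devices` from ansible_facts.devices,
# filtering out removable media, loop/dm devices, and disks that already
# carry partitions or LVM holders.
# osd_auto_discovery: true
```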
2025-05-19 14:47:18.994224 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-05-19 14:47:18.994234 | orchestrator | Monday 19 May 2025 14:45:26 +0000 (0:00:00.562) 0:00:15.969 ************
2025-05-19 14:47:18.994244 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.994253 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.994263 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.994272 | orchestrator |
2025-05-19 14:47:18.994281 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-05-19 14:47:18.994291 | orchestrator | Monday 19 May 2025 14:45:26 +0000 (0:00:00.625) 0:00:16.594 ************
2025-05-19 14:47:18.994301 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.994310 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.994320 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.994329 | orchestrator |
2025-05-19 14:47:18.994339 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-19 14:47:18.994348 | orchestrator | Monday 19 May 2025 14:45:27 +0000 (0:00:00.454) 0:00:17.049 ************
2025-05-19 14:47:18.994358 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.994367 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.994377 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.994386 | orchestrator |
2025-05-19 14:47:18.994396 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-19 14:47:18.994405 | orchestrator | Monday 19 May 2025 14:45:27 +0000 (0:00:00.596) 0:00:17.645 ************
2025-05-19 14:47:18.994415 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.994424 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.994434 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.994443 | orchestrator |
2025-05-19 14:47:18.994453 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-19 14:47:18.994462 | orchestrator | Monday 19 May 2025 14:45:28 +0000 (0:00:00.283) 0:00:17.929 ************
2025-05-19 14:47:18.994472 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.994481 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.994491 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.994500 | orchestrator |
2025-05-19 14:47:18.994510 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-19 14:47:18.994519 | orchestrator | Monday 19 May 2025 14:45:28 +0000 (0:00:00.370) 0:00:18.299 ************
2025-05-19 14:47:18.994528 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.994538 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.994547 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.994591 | orchestrator |
2025-05-19 14:47:18.994602 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-05-19 14:47:18.994612 | orchestrator | Monday 19 May 2025 14:45:29 +0000 (0:00:00.446) 0:00:18.745 ************
2025-05-19 14:47:18.994621 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-19 14:47:18.994631 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-19 14:47:18.994641 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-19 14:47:18.994651 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-19 14:47:18.994660 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-19 14:47:18.994669 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-19 14:47:18.994679 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-19 14:47:18.994688 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-19 14:47:18.994698 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-19 14:47:18.994707 | orchestrator |
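`Set_fact _monitor_addresses - ipv4` ran once per monitor (testbed-node-0/1/2) on every OSD node, while its ipv6 counterpart just below is skipped, so the monitors are addressed over IPv4 only. The fact being accumulated is a list of name/address pairs, roughly of the following shape; this is a simplified sketch of the pattern, not the role's literal task, and the `monitor_address` hostvar lookup is an assumption:

```yaml
# Simplified sketch: accumulate one {name, addr} entry per monitor host.
- name: Set_fact _monitor_addresses - ipv4
  ansible.builtin.set_fact:
    _monitor_addresses: >-
      {{ _monitor_addresses | default([])
         + [{'name': item, 'addr': hostvars[item]['monitor_address']}] }}
  loop: "{{ groups[mon_group_name] }}"
  when: ip_version == 'ipv4'
```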
2025-05-19 14:47:18.994717 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-05-19 14:47:18.994726 | orchestrator | Monday 19 May 2025 14:45:29 +0000 (0:00:00.767) 0:00:19.512 ************
2025-05-19 14:47:18.994743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-19 14:47:18.994753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-19 14:47:18.994762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-19 14:47:18.994772 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.994787 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-19 14:47:18.994805 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-19 14:47:18.994823 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-19 14:47:18.994841 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.994858 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-19 14:47:18.994870 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-19 14:47:18.994879 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-19 14:47:18.994888 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.994898 | orchestrator |
2025-05-19 14:47:18.994907 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-05-19 14:47:18.994918 | orchestrator | Monday 19 May 2025 14:45:30 +0000 (0:00:00.302) 0:00:19.815 ************
2025-05-19 14:47:18.994934 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:47:18.994951 | orchestrator |
2025-05-19 14:47:18.994974 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-19 14:47:18.994992 | orchestrator | Monday 19 May 2025 14:45:30 +0000 (0:00:00.664) 0:00:20.479 ************
2025-05-19 14:47:18.995009 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.995025 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.995038 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.995048 | orchestrator |
2025-05-19 14:47:18.995064 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-19 14:47:18.995074 | orchestrator | Monday 19 May 2025 14:45:31 +0000 (0:00:00.292) 0:00:20.772 ************
2025-05-19 14:47:18.995083 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.995092 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.995102 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.995111 | orchestrator |
2025-05-19 14:47:18.995120 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-19 14:47:18.995130 | orchestrator | Monday 19 May 2025 14:45:31 +0000 (0:00:00.261) 0:00:21.034 ************
2025-05-19 14:47:18.995139 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.995149 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.995158 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:47:18.995167 | orchestrator |
2025-05-19 14:47:18.995177 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-05-19 14:47:18.995186 | orchestrator | Monday 19 May 2025 14:45:31 +0000 (0:00:00.283) 0:00:21.318 ************
2025-05-19 14:47:18.995195 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:47:18.995205 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:47:18.995214 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:47:18.995223 | orchestrator |
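The skip/ok pattern of the `_radosgw_address` tasks above and below encodes a precedence for resolving the RGW bind address: a `radosgw_address_block` subnet would win, then an explicit `radosgw_address`, and only then the `radosgw_interface` lookups. Since only `Set_fact _radosgw_address to radosgw_address` reported `ok`, this deployment evidently sets `radosgw_address` directly. A condensed sketch of that selection logic; the conditions are approximated from ceph-ansible's `subnet`/`x.x.x.x` placeholder defaults and should be read as illustrative:

```yaml
# Condensed sketch of the address resolution in set_radosgw_address.yml.
- name: Set_fact _radosgw_address to radosgw_address_block ipv4
  ansible.builtin.set_fact:
    _radosgw_address: >-
      {{ ansible_facts['all_ipv4_addresses']
         | ips_in_ranges(radosgw_address_block.split(',')) | first }}
  when: radosgw_address_block != 'subnet'   # placeholder default -> skipped here

- name: Set_fact _radosgw_address to radosgw_address
  ansible.builtin.set_fact:
    _radosgw_address: "{{ radosgw_address }}"
  when: radosgw_address != 'x.x.x.x'        # explicitly set -> ok on all three nodes
```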
14:47:18.995275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 14:47:18.995291 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:47:18.995308 | orchestrator | 2025-05-19 14:47:18.995324 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 14:47:18.995335 | orchestrator | Monday 19 May 2025 14:45:32 +0000 (0:00:00.336) 0:00:22.195 ************ 2025-05-19 14:47:18.995355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 14:47:18.995364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 14:47:18.995374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 14:47:18.995383 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:47:18.995392 | orchestrator | 2025-05-19 14:47:18.995402 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 14:47:18.995411 | orchestrator | Monday 19 May 2025 14:45:32 +0000 (0:00:00.340) 0:00:22.535 ************ 2025-05-19 14:47:18.995420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 14:47:18.995430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 14:47:18.995439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 14:47:18.995449 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:47:18.995458 | orchestrator | 2025-05-19 14:47:18.995468 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-19 14:47:18.995477 | orchestrator | Monday 19 May 2025 14:45:33 +0000 (0:00:00.346) 0:00:22.882 ************ 2025-05-19 14:47:18.995487 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:47:18.995496 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:47:18.995505 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:47:18.995515 | orchestrator | 2025-05-19 14:47:18.995524 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-19 14:47:18.995534 | orchestrator | Monday 19 May 2025 14:45:33 +0000 (0:00:00.281) 0:00:23.164 ************ 2025-05-19 14:47:18.995543 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-19 14:47:18.995572 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-19 14:47:18.995585 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-19 14:47:18.995595 | orchestrator | 2025-05-19 14:47:18.995604 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-05-19 14:47:18.995614 | orchestrator | Monday 19 May 2025 14:45:33 +0000 (0:00:00.463) 0:00:23.627 ************ 2025-05-19 14:47:18.995623 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-19 14:47:18.995633 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 14:47:18.995642 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 14:47:18.995652 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-19 14:47:18.995661 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-19 14:47:18.995670 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-19 14:47:18.995680 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
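Editor's note: the set_radosgw_address.yml block above resolves the RGW bind address from exactly one of three inputs. In this run the explicit radosgw_address path returned ok on all three nodes while the address-block and interface variants were skipped, and rgw_instances ends up with a single instance (item=0). A minimal group_vars sketch of the three alternatives, with illustrative values (only the shape is taken from this log):

# Pick exactly one of the three mechanisms (values here are assumptions):
radosgw_address: 192.168.16.13             # the path taken in this run: a literal address
# radosgw_address_block: 192.168.16.0/20   # or derive the address from a CIDR
# radosgw_interface: eth1                  # or derive it from an interface
radosgw_num_instances: 1                   # consistent with the single rgw_instances item above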
2025-05-19 14:47:18.995689 | orchestrator |
2025-05-19 14:47:18.995699 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-05-19 14:47:18.995708 | orchestrator | Monday 19 May 2025 14:45:34 +0000 (0:00:00.918) 0:00:24.545 ************
2025-05-19 14:47:18.995717 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-19 14:47:18.995727 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-19 14:47:18.995736 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-19 14:47:18.995745 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 14:47:18.995755 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-19 14:47:18.995769 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-19 14:47:18.995779 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-19 14:47:18.995788 | orchestrator |
2025-05-19 14:47:18.995803 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-05-19 14:47:18.995819 | orchestrator | Monday 19 May 2025 14:45:36 +0000 (0:00:01.866) 0:00:26.412 ************
2025-05-19 14:47:18.995829 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:47:18.995838 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:47:18.995848 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-05-19 14:47:18.995857 | orchestrator |
2025-05-19 14:47:18.995867 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-05-19 14:47:18.995876 | orchestrator | Monday 19 May 2025 14:45:37 +0000 (0:00:00.352) 0:00:26.764 ************
2025-05-19 14:47:18.995886 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-19 14:47:18.995897 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-19 14:47:18.995907 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-19 14:47:18.995917 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-19 14:47:18.995927 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-19 14:47:18.995936 | orchestrator |
2025-05-19 14:47:18.995946 | orchestrator | TASK [generate keys] ***********************************************************
2025-05-19 14:47:18.995956 | orchestrator | Monday 19 May 2025 14:46:22 +0000 (0:00:45.844) 0:01:12.608 ************
2025-05-19 14:47:18.995965 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.995975 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.995984 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.995993 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996003 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996012 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996022 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-05-19 14:47:18.996031 | orchestrator |
2025-05-19 14:47:18.996041 | orchestrator | TASK [get keys from monitors] **************************************************
2025-05-19 14:47:18.996050 | orchestrator | Monday 19 May 2025 14:46:46 +0000 (0:00:23.824) 0:01:36.433 ************
2025-05-19 14:47:18.996060 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996069 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996079 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996088 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996097 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996113 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996123 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-19 14:47:18.996132 | orchestrator |
2025-05-19 14:47:18.996142 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-05-19 14:47:18.996151 | orchestrator | Monday 19 May 2025 14:46:58 +0000 (0:00:12.091) 0:01:48.524 ************
2025-05-19 14:47:18.996161 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996170 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 14:47:18.996179 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 14:47:18.996195 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996204 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 14:47:18.996214 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 14:47:18.996228 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996238 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 14:47:18.996248 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
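Editor's note: each item passed to "create openstack pool(s)" is one pool specification. Rendered as YAML, the backups item corresponds to the following; key names and values are taken from the logged dict, while the variable name openstack_pools is an assumption about the surrounding playbook:

openstack_pools:
  - name: backups
    application: rbd
    type: 1                    # replicated pool
    size: 3                    # three replicas
    min_size: 0                # 0 means "use the cluster default"
    rule_name: replicated_rule
    pg_num: 32
    pgp_num: 32
    pg_autoscale_mode: false   # autoscaler disabled, fixed PG counts
    erasure_profile: ""
    expected_num_objects: ""
  # volumes, images, metrics and vms use the same settings, name aside.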
2025-05-19 14:47:18.996257 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996266 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 14:47:18.996276 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 14:47:18.996285 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996295 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 14:47:18.996304 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 14:47:18.996314 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 14:47:18.996323 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 14:47:18.996333 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 14:47:18.996343 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-05-19 14:47:18.996352 | orchestrator |
2025-05-19 14:47:18.996361 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:47:18.996371 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-05-19 14:47:18.996382 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-05-19 14:47:18.996392 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-05-19 14:47:18.996401 | orchestrator |
2025-05-19 14:47:18.996411 | orchestrator |
2025-05-19 14:47:18.996420 | orchestrator |
2025-05-19 14:47:18.996430 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:47:18.996439 | orchestrator | Monday 19 May 2025 14:47:15 +0000 (0:00:17.016) 0:02:05.541 ************
2025-05-19 14:47:18.996449 | orchestrator | ===============================================================================
2025-05-19 14:47:18.996458 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.84s
2025-05-19 14:47:18.996468 | orchestrator | generate keys ---------------------------------------------------------- 23.82s
2025-05-19 14:47:18.996477 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.02s
2025-05-19 14:47:18.996486 | orchestrator | get keys from monitors ------------------------------------------------- 12.09s
2025-05-19 14:47:18.996502 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.04s
2025-05-19 14:47:18.996511 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.87s
2025-05-19 14:47:18.996521 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.60s
2025-05-19 14:47:18.996530 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.92s
2025-05-19 14:47:18.996540 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.77s
2025-05-19 14:47:18.996549 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.71s
2025-05-19 14:47:18.996608 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.71s
2025-05-19 14:47:18.996619 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.66s
2025-05-19 14:47:18.996628 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.63s
2025-05-19 14:47:18.996637 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.62s
2025-05-19 14:47:18.996647 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s
2025-05-19 14:47:18.996656 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.60s
2025-05-19 14:47:18.996665 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.57s
2025-05-19 14:47:18.996675 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.56s
2025-05-19 14:47:18.996684 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.54s
2025-05-19 14:47:18.996693 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.51s
2025-05-19 14:47:18.996703 | orchestrator | 2025-05-19 14:47:18 | INFO  | Task d7da71d4-c4a7-4a85-95b7-1cebcebc4a23 is in state STARTED
2025-05-19 14:47:18.996718 | orchestrator | 2025-05-19 14:47:18 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:18.996735 | orchestrator | 2025-05-19 14:47:18 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:22.052305 | orchestrator | 2025-05-19 14:47:22 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:22.052412 | orchestrator | 2025-05-19 14:47:22 | INFO  | Task d7da71d4-c4a7-4a85-95b7-1cebcebc4a23 is in state STARTED
2025-05-19 14:47:22.052429 | orchestrator | 2025-05-19 14:47:22 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:22.052441 | orchestrator | 2025-05-19 14:47:22 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:25.109481 | orchestrator | 2025-05-19 14:47:25 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:25.111115 | orchestrator | 2025-05-19 14:47:25 | INFO  | Task d7da71d4-c4a7-4a85-95b7-1cebcebc4a23 is in state STARTED
2025-05-19 14:47:25.114308 | orchestrator | 2025-05-19 14:47:25 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:25.114629 | orchestrator | 2025-05-19 14:47:25 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:28.162992 | orchestrator | 2025-05-19 14:47:28 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:28.164984 | orchestrator | 2025-05-19 14:47:28 | INFO  | Task d7da71d4-c4a7-4a85-95b7-1cebcebc4a23 is in state STARTED
2025-05-19 14:47:28.167987 | orchestrator | 2025-05-19 14:47:28 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:28.168033 | orchestrator | 2025-05-19 14:47:28 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:31.225355 | orchestrator | 2025-05-19 14:47:31 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:31.227330 | orchestrator | 2025-05-19 14:47:31 | INFO  | Task d7da71d4-c4a7-4a85-95b7-1cebcebc4a23 is in state STARTED
2025-05-19 14:47:31.228973 | orchestrator | 2025-05-19 14:47:31 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
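Editor's note: the "generate keys" and "get keys from monitors" steps (23.82s and 12.09s in the recap above) run ceph auth on the first monitor, testbed-node-0, once per OpenStack client. A hedged single-client equivalent as a standalone Ansible task; the capability strings are illustrative, taken from the usual Ceph-for-OpenStack pattern rather than from this playbook:

- name: Generate the cinder client key (sketch; caps are illustrative)
  ansible.builtin.command: >
    ceph auth get-or-create client.cinder
    mon 'profile rbd'
    osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'
    -o /etc/ceph/ceph.client.cinder.keyring
  delegate_to: testbed-node-0   # matches the delegation shown in the log
  become: true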
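Editor's note: the interleaved "Task ... is in state STARTED" INFO lines come from the OSISM CLI on the manager, which polls the state of the background deployment tasks once per second until they report SUCCESS. A rough sketch of that polling pattern in Ansible terms (check_task_state is a hypothetical helper script, not part of OSISM; the real CLI's internals are not shown in this log):

- name: Poll a deployment task until it leaves STARTED (sketch)
  ansible.builtin.command: /usr/local/bin/check_task_state 82955293-3fe4-47f3-b7bd-c81af2bb23ac  # hypothetical helper
  register: task_state
  changed_when: false
  until: task_state.stdout == "SUCCESS"
  retries: 600
  delay: 1   # matches the "Wait 1 second(s) until the next check" cadence above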
2025-05-19 14:47:31.229220 | orchestrator | 2025-05-19 14:47:31 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:34.283216 | orchestrator | 2025-05-19 14:47:34 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:34.285565 | orchestrator | 2025-05-19 14:47:34 | INFO  | Task d7da71d4-c4a7-4a85-95b7-1cebcebc4a23 is in state STARTED
2025-05-19 14:47:34.288007 | orchestrator | 2025-05-19 14:47:34 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:34.288043 | orchestrator | 2025-05-19 14:47:34 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:37.347315 | orchestrator | 2025-05-19 14:47:37 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:37.349136 | orchestrator | 2025-05-19 14:47:37 | INFO  | Task d7da71d4-c4a7-4a85-95b7-1cebcebc4a23 is in state STARTED
2025-05-19 14:47:37.351162 | orchestrator | 2025-05-19 14:47:37 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:37.351209 | orchestrator | 2025-05-19 14:47:37 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:40.403569 | orchestrator | 2025-05-19 14:47:40 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:40.403763 | orchestrator | 2025-05-19 14:47:40 | INFO  | Task d7da71d4-c4a7-4a85-95b7-1cebcebc4a23 is in state STARTED
2025-05-19 14:47:40.404796 | orchestrator | 2025-05-19 14:47:40 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:40.404844 | orchestrator | 2025-05-19 14:47:40 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:43.456052 | orchestrator | 2025-05-19 14:47:43 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:43.457795 | orchestrator | 2025-05-19 14:47:43 | INFO  | Task d7da71d4-c4a7-4a85-95b7-1cebcebc4a23 is in state STARTED
2025-05-19 14:47:43.460947 | orchestrator | 2025-05-19 14:47:43 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:43.461024 | orchestrator | 2025-05-19 14:47:43 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:46.542113 | orchestrator | 2025-05-19 14:47:46 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:46.542389 | orchestrator | 2025-05-19 14:47:46 | INFO  | Task d7da71d4-c4a7-4a85-95b7-1cebcebc4a23 is in state SUCCESS
2025-05-19 14:47:46.545788 | orchestrator | 2025-05-19 14:47:46 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED
2025-05-19 14:47:46.550196 | orchestrator | 2025-05-19 14:47:46 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:46.550248 | orchestrator | 2025-05-19 14:47:46 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:49.599952 | orchestrator | 2025-05-19 14:47:49 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:49.602837 | orchestrator | 2025-05-19 14:47:49 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED
2025-05-19 14:47:49.604520 | orchestrator | 2025-05-19 14:47:49 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:49.604848 | orchestrator | 2025-05-19 14:47:49 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:52.649286 | orchestrator | 2025-05-19 14:47:52 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:52.652155 | orchestrator | 2025-05-19 14:47:52 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED
2025-05-19 14:47:52.653971 | orchestrator | 2025-05-19 14:47:52 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state STARTED
2025-05-19 14:47:52.654375 | orchestrator | 2025-05-19 14:47:52 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:55.708952 | orchestrator | 2025-05-19 14:47:55 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED
2025-05-19 14:47:55.710894 | orchestrator | 2025-05-19 14:47:55 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED
2025-05-19 14:47:55.712495 | orchestrator | 2025-05-19 14:47:55 | INFO  | Task 82955293-3fe4-47f3-b7bd-c81af2bb23ac is in state SUCCESS
2025-05-19 14:47:55.712685 | orchestrator | 2025-05-19 14:47:55 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:47:55.714532 | orchestrator |
2025-05-19 14:47:55.714561 | orchestrator |
2025-05-19 14:47:55.714570 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-19 14:47:55.714578 | orchestrator |
2025-05-19 14:47:55.714586 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-05-19 14:47:55.714595 | orchestrator | Monday 19 May 2025 14:47:20 +0000 (0:00:00.150) 0:00:00.150 ************
2025-05-19 14:47:55.714603 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-05-19 14:47:55.714636 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-19 14:47:55.714645 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-19 14:47:55.714653 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-05-19 14:47:55.714661 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-19 14:47:55.714669 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-05-19 14:47:55.714677 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-05-19 14:47:55.714685 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-05-19 14:47:55.714693 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-05-19 14:47:55.714700 | orchestrator |
2025-05-19 14:47:55.714708 | orchestrator | TASK [Create share directory] **************************************************
2025-05-19 14:47:55.714718 | orchestrator | Monday 19 May 2025 14:47:24 +0000 (0:00:04.003) 0:00:04.153 ************
2025-05-19 14:47:55.714732 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-19 14:47:55.714741 | orchestrator |
2025-05-19 14:47:55.714752 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-05-19 14:47:55.714765 | orchestrator | Monday 19 May 2025 14:47:25 +0000 (0:00:00.907) 0:00:05.061 ************
2025-05-19 14:47:55.714776 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-19 14:47:55.714789 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-19 14:47:55.714803 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-19 14:47:55.714811 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-19 14:47:55.714819 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-19 14:47:55.714827 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-19 14:47:55.714835 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-19 14:47:55.714842 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-19 14:47:55.714871 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-19 14:47:55.714879 | orchestrator |
2025-05-19 14:47:55.714887 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-05-19 14:47:55.714895 | orchestrator | Monday 19 May 2025 14:47:37 +0000 (0:00:12.434) 0:00:17.496 ************
2025-05-19 14:47:55.714903 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-05-19 14:47:55.714911 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-19 14:47:55.714936 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-19 14:47:55.714947 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-05-19 14:47:55.714959 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-19 14:47:55.714968 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-05-19 14:47:55.714978 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-05-19 14:47:55.714990 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-05-19 14:47:55.714998 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-05-19 14:47:55.715006 | orchestrator |
2025-05-19 14:47:55.715014 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:47:55.715022 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:47:55.715031 | orchestrator |
2025-05-19 14:47:55.715039 | orchestrator |
2025-05-19 14:47:55.715047 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:47:55.715055 | orchestrator | Monday 19 May 2025 14:47:43 +0000 (0:00:06.310) 0:00:23.806 ************
2025-05-19 14:47:55.715062 | orchestrator | ===============================================================================
2025-05-19 14:47:55.715070 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.43s
2025-05-19 14:47:55.715078 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.31s
2025-05-19 14:47:55.715086 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.00s
2025-05-19 14:47:55.715093 | orchestrator | Create share directory -------------------------------------------------- 0.91s
2025-05-19 14:47:55.715101 | orchestrator |
2025-05-19 14:47:55.715109 | orchestrator |
2025-05-19 14:47:55.715117 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:47:55.715124 | orchestrator |
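Editor's note, before the Kolla plays begin: the completed "Copy ceph keys to the configuration repository" play above pulls the freshly generated keyrings off testbed-node-0 and writes them into the manager's share and configuration directories. A minimal sketch of that fetch-and-write pattern; the destination path is an assumption, and the real play also loops some keyrings more than once, as visible in the item list:

- name: Copy ceph keys to the configuration repository (sketch)
  hosts: testbed-manager
  vars:
    ceph_keyrings:
      - ceph.client.admin.keyring
      - ceph.client.cinder.keyring
      - ceph.client.cinder-backup.keyring
      - ceph.client.nova.keyring
      - ceph.client.glance.keyring
      - ceph.client.gnocchi.keyring
      - ceph.client.manila.keyring
  tasks:
    - name: Fetch all ceph keys
      ansible.builtin.slurp:
        src: "/etc/ceph/{{ item }}"
      delegate_to: testbed-node-0
      loop: "{{ ceph_keyrings }}"
      register: ceph_keys

    - name: Write ceph keys to the configuration directory
      ansible.builtin.copy:
        content: "{{ item.content | b64decode }}"
        dest: "/opt/configuration/environments/kolla/files/ceph/{{ item.item }}"  # assumed destination
        mode: "0600"
      loop: "{{ ceph_keys.results }}"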
2025-05-19 14:47:55.715144 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:47:55.715157 | orchestrator | Monday 19 May 2025 14:46:11 +0000 (0:00:00.250) 0:00:00.250 ************ 2025-05-19 14:47:55.715171 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.715184 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.715197 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.715209 | orchestrator | 2025-05-19 14:47:55.715222 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:47:55.715234 | orchestrator | Monday 19 May 2025 14:46:11 +0000 (0:00:00.258) 0:00:00.508 ************ 2025-05-19 14:47:55.715247 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-19 14:47:55.715260 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-19 14:47:55.715272 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-19 14:47:55.715283 | orchestrator | 2025-05-19 14:47:55.715296 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-19 14:47:55.715308 | orchestrator | 2025-05-19 14:47:55.715319 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 14:47:55.715332 | orchestrator | Monday 19 May 2025 14:46:11 +0000 (0:00:00.375) 0:00:00.884 ************ 2025-05-19 14:47:55.715344 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:47:55.715366 | orchestrator | 2025-05-19 14:47:55.715378 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-19 14:47:55.715391 | orchestrator | Monday 19 May 2025 14:46:12 +0000 (0:00:00.468) 0:00:01.353 ************ 2025-05-19 14:47:55.715420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:47:55.715456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:47:55.715480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:47:55.715489 | orchestrator | 2025-05-19 14:47:55.715497 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-19 14:47:55.715505 | orchestrator | Monday 19 May 2025 14:46:13 +0000 (0:00:01.124) 0:00:02.477 ************ 2025-05-19 14:47:55.715513 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.715521 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.715529 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.715536 | orchestrator | 2025-05-19 14:47:55.715544 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 14:47:55.715552 | orchestrator | Monday 19 May 2025 14:46:13 +0000 (0:00:00.400) 0:00:02.878 ************ 2025-05-19 14:47:55.715560 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-19 14:47:55.715568 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-19 14:47:55.715581 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-19 14:47:55.715590 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-19 14:47:55.715604 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-19 14:47:55.715780 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-19 14:47:55.715932 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-19 14:47:55.715947 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-19 14:47:55.715958 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-19 14:47:55.715969 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-19 14:47:55.715980 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-19 14:47:55.715990 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 
'enabled': False})  2025-05-19 14:47:55.716001 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-19 14:47:55.716011 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-19 14:47:55.716022 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-19 14:47:55.716032 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-19 14:47:55.716043 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-19 14:47:55.716053 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-19 14:47:55.716063 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-19 14:47:55.716074 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-19 14:47:55.716084 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-19 14:47:55.716095 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-19 14:47:55.716105 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-19 14:47:55.716116 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-19 14:47:55.716128 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-19 14:47:55.716140 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-19 14:47:55.716151 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-19 14:47:55.716162 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-19 14:47:55.716172 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-19 14:47:55.716197 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-19 14:47:55.716208 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-19 14:47:55.716219 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-19 14:47:55.716230 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-19 14:47:55.716242 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-19 14:47:55.716252 | orchestrator | 2025-05-19 14:47:55.716271 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2025-05-19 14:47:55.716283 | orchestrator | Monday 19 May 2025 14:46:14 +0000 (0:00:00.658) 0:00:03.536 ************ 2025-05-19 14:47:55.716293 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.716305 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.716315 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.716326 | orchestrator | 2025-05-19 14:47:55.716336 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 14:47:55.716347 | orchestrator | Monday 19 May 2025 14:46:14 +0000 (0:00:00.275) 0:00:03.812 ************ 2025-05-19 14:47:55.716358 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.716369 | orchestrator | 2025-05-19 14:47:55.716380 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 14:47:55.716419 | orchestrator | Monday 19 May 2025 14:46:14 +0000 (0:00:00.111) 0:00:03.923 ************ 2025-05-19 14:47:55.716431 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.716442 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.716453 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.716463 | orchestrator | 2025-05-19 14:47:55.716474 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 14:47:55.716485 | orchestrator | Monday 19 May 2025 14:46:15 +0000 (0:00:00.417) 0:00:04.340 ************ 2025-05-19 14:47:55.716496 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.716506 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.716517 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.716527 | orchestrator | 2025-05-19 14:47:55.716538 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 14:47:55.716549 | orchestrator | Monday 19 May 2025 14:46:15 +0000 (0:00:00.281) 0:00:04.622 ************ 2025-05-19 14:47:55.716560 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.716570 | orchestrator | 2025-05-19 14:47:55.716580 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 14:47:55.716591 | orchestrator | Monday 19 May 2025 14:46:15 +0000 (0:00:00.127) 0:00:04.749 ************ 2025-05-19 14:47:55.716602 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.716612 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.716650 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.716661 | orchestrator | 2025-05-19 14:47:55.716672 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 14:47:55.716683 | orchestrator | Monday 19 May 2025 14:46:16 +0000 (0:00:00.283) 0:00:05.033 ************ 2025-05-19 14:47:55.716694 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.716705 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.716715 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.716726 | orchestrator | 2025-05-19 14:47:55.716737 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 14:47:55.716747 | orchestrator | Monday 19 May 2025 14:46:16 +0000 (0:00:00.268) 0:00:05.302 ************ 2025-05-19 14:47:55.716758 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.716768 | orchestrator | 2025-05-19 14:47:55.716779 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 
2025-05-19 14:47:55.716790 | orchestrator | Monday 19 May 2025 14:46:16 +0000 (0:00:00.307) 0:00:05.609 ************ 2025-05-19 14:47:55.716801 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.716812 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.716822 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.716833 | orchestrator | 2025-05-19 14:47:55.716843 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 14:47:55.716854 | orchestrator | Monday 19 May 2025 14:46:16 +0000 (0:00:00.308) 0:00:05.918 ************ 2025-05-19 14:47:55.716865 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.716876 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.716886 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.716897 | orchestrator | 2025-05-19 14:47:55.716907 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 14:47:55.716925 | orchestrator | Monday 19 May 2025 14:46:17 +0000 (0:00:00.307) 0:00:06.226 ************ 2025-05-19 14:47:55.716936 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.716947 | orchestrator | 2025-05-19 14:47:55.716957 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 14:47:55.716968 | orchestrator | Monday 19 May 2025 14:46:17 +0000 (0:00:00.113) 0:00:06.340 ************ 2025-05-19 14:47:55.716979 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.716989 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.717000 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.717011 | orchestrator | 2025-05-19 14:47:55.717022 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 14:47:55.717032 | orchestrator | Monday 19 May 2025 14:46:17 +0000 (0:00:00.272) 0:00:06.612 ************ 2025-05-19 14:47:55.717043 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.717053 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.717064 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.717074 | orchestrator | 2025-05-19 14:47:55.717085 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 14:47:55.717101 | orchestrator | Monday 19 May 2025 14:46:18 +0000 (0:00:00.462) 0:00:07.075 ************ 2025-05-19 14:47:55.717112 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.717123 | orchestrator | 2025-05-19 14:47:55.717133 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 14:47:55.717144 | orchestrator | Monday 19 May 2025 14:46:18 +0000 (0:00:00.145) 0:00:07.221 ************ 2025-05-19 14:47:55.717155 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.717165 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.717176 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.717187 | orchestrator | 2025-05-19 14:47:55.717197 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 14:47:55.717208 | orchestrator | Monday 19 May 2025 14:46:18 +0000 (0:00:00.294) 0:00:07.515 ************ 2025-05-19 14:47:55.717219 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.717230 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.717240 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.717251 | orchestrator | 2025-05-19 
14:47:55.717261 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 14:47:55.717272 | orchestrator | Monday 19 May 2025 14:46:18 +0000 (0:00:00.300) 0:00:07.815 ************ 2025-05-19 14:47:55.717282 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.717293 | orchestrator | 2025-05-19 14:47:55.717303 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 14:47:55.717314 | orchestrator | Monday 19 May 2025 14:46:18 +0000 (0:00:00.113) 0:00:07.928 ************ 2025-05-19 14:47:55.717325 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.717335 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.717346 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.717357 | orchestrator | 2025-05-19 14:47:55.717367 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 14:47:55.717378 | orchestrator | Monday 19 May 2025 14:46:19 +0000 (0:00:00.452) 0:00:08.381 ************ 2025-05-19 14:47:55.717389 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.717399 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.717410 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.717420 | orchestrator | 2025-05-19 14:47:55.717438 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 14:47:55.717450 | orchestrator | Monday 19 May 2025 14:46:19 +0000 (0:00:00.313) 0:00:08.694 ************ 2025-05-19 14:47:55.717460 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.717471 | orchestrator | 2025-05-19 14:47:55.717481 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 14:47:55.717492 | orchestrator | Monday 19 May 2025 14:46:19 +0000 (0:00:00.123) 0:00:08.818 ************ 2025-05-19 14:47:55.717503 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.717519 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.717530 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.717540 | orchestrator | 2025-05-19 14:47:55.717551 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 14:47:55.717562 | orchestrator | Monday 19 May 2025 14:46:20 +0000 (0:00:00.348) 0:00:09.167 ************ 2025-05-19 14:47:55.717572 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.717668 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.717681 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.717692 | orchestrator | 2025-05-19 14:47:55.717702 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 14:47:55.717713 | orchestrator | Monday 19 May 2025 14:46:20 +0000 (0:00:00.311) 0:00:09.478 ************ 2025-05-19 14:47:55.717723 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.717734 | orchestrator | 2025-05-19 14:47:55.717744 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 14:47:55.717755 | orchestrator | Monday 19 May 2025 14:46:20 +0000 (0:00:00.111) 0:00:09.589 ************ 2025-05-19 14:47:55.717765 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.717776 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.717786 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.717797 | orchestrator | 2025-05-19 14:47:55.717807 | 
orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 14:47:55.717818 | orchestrator | Monday 19 May 2025 14:46:21 +0000 (0:00:00.457) 0:00:10.047 ************ 2025-05-19 14:47:55.717828 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.717839 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.717850 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.717861 | orchestrator | 2025-05-19 14:47:55.717872 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 14:47:55.717882 | orchestrator | Monday 19 May 2025 14:46:21 +0000 (0:00:00.295) 0:00:10.342 ************ 2025-05-19 14:47:55.717893 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.717905 | orchestrator | 2025-05-19 14:47:55.717925 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 14:47:55.717943 | orchestrator | Monday 19 May 2025 14:46:21 +0000 (0:00:00.114) 0:00:10.457 ************ 2025-05-19 14:47:55.717960 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.717978 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.718006 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.718103 | orchestrator | 2025-05-19 14:47:55.718125 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 14:47:55.718141 | orchestrator | Monday 19 May 2025 14:46:21 +0000 (0:00:00.260) 0:00:10.718 ************ 2025-05-19 14:47:55.718152 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:47:55.718162 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:47:55.718173 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:47:55.718183 | orchestrator | 2025-05-19 14:47:55.718194 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 14:47:55.718205 | orchestrator | Monday 19 May 2025 14:46:22 +0000 (0:00:00.472) 0:00:11.190 ************ 2025-05-19 14:47:55.718215 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.718226 | orchestrator | 2025-05-19 14:47:55.718237 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 14:47:55.718247 | orchestrator | Monday 19 May 2025 14:46:22 +0000 (0:00:00.113) 0:00:11.303 ************ 2025-05-19 14:47:55.718257 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.718268 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.718279 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.718289 | orchestrator | 2025-05-19 14:47:55.718307 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-19 14:47:55.718318 | orchestrator | Monday 19 May 2025 14:46:22 +0000 (0:00:00.270) 0:00:11.574 ************ 2025-05-19 14:47:55.718329 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:47:55.718350 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:47:55.718360 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:47:55.718371 | orchestrator | 2025-05-19 14:47:55.718382 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-19 14:47:55.718392 | orchestrator | Monday 19 May 2025 14:46:24 +0000 (0:00:01.499) 0:00:13.073 ************ 2025-05-19 14:47:55.718403 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-19 14:47:55.718413 | orchestrator 
| changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-19 14:47:55.718424 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-19 14:47:55.718434 | orchestrator | 2025-05-19 14:47:55.718445 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-19 14:47:55.718456 | orchestrator | Monday 19 May 2025 14:46:25 +0000 (0:00:01.743) 0:00:14.816 ************ 2025-05-19 14:47:55.718466 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-19 14:47:55.718478 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-19 14:47:55.718488 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-19 14:47:55.718499 | orchestrator | 2025-05-19 14:47:55.718509 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-19 14:47:55.718520 | orchestrator | Monday 19 May 2025 14:46:28 +0000 (0:00:02.403) 0:00:17.220 ************ 2025-05-19 14:47:55.718542 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-19 14:47:55.718554 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-19 14:47:55.718564 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-19 14:47:55.718575 | orchestrator | 2025-05-19 14:47:55.718585 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-19 14:47:55.718596 | orchestrator | Monday 19 May 2025 14:46:29 +0000 (0:00:01.501) 0:00:18.722 ************ 2025-05-19 14:47:55.718606 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.718670 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.718683 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.718694 | orchestrator | 2025-05-19 14:47:55.718705 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-19 14:47:55.718715 | orchestrator | Monday 19 May 2025 14:46:29 +0000 (0:00:00.263) 0:00:18.986 ************ 2025-05-19 14:47:55.718726 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.718737 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.718747 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.718758 | orchestrator | 2025-05-19 14:47:55.718769 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 14:47:55.718779 | orchestrator | Monday 19 May 2025 14:46:30 +0000 (0:00:00.268) 0:00:19.254 ************ 2025-05-19 14:47:55.718790 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:47:55.718801 | orchestrator | 2025-05-19 14:47:55.718812 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-19 14:47:55.718822 | orchestrator | Monday 19 May 2025 14:46:31 +0000 (0:00:00.762) 0:00:20.016 ************ 2025-05-19 14:47:55.718854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:47:55.718891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:47:55.718912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:47:55.718939 | orchestrator | 2025-05-19 14:47:55.718960 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-19 14:47:55.718978 | orchestrator | Monday 19 May 2025 14:46:32 +0000 (0:00:01.392) 0:00:21.409 ************ 2025-05-19 14:47:55.719012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 
'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 14:47:55.719111 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.719136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 14:47:55.719156 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.719169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 14:47:55.719187 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.719198 | orchestrator | 2025-05-19 14:47:55.719209 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-19 14:47:55.719220 | orchestrator | Monday 19 May 2025 14:46:32 +0000 (0:00:00.565) 0:00:21.975 ************ 2025-05-19 14:47:55.719245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 14:47:55.719258 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.719270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 14:47:55.719288 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.719314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 14:47:55.719327 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.719337 | orchestrator | 2025-05-19 14:47:55.719348 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-19 14:47:55.719359 | orchestrator | Monday 19 May 2025 14:46:33 +0000 (0:00:00.993) 0:00:22.968 ************ 2025-05-19 14:47:55.719377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:47:55.719404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:47:55.719430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 14:47:55.719442 | orchestrator | 2025-05-19 14:47:55.719453 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 14:47:55.719464 | orchestrator | Monday 19 May 2025 14:46:35 +0000 (0:00:01.123) 0:00:24.091 ************ 2025-05-19 14:47:55.719475 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:47:55.719485 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:47:55.719497 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:47:55.719508 | orchestrator | 2025-05-19 14:47:55.719518 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 14:47:55.719529 | orchestrator | Monday 19 May 2025 14:46:35 +0000 (0:00:00.349) 0:00:24.441 ************ 2025-05-19 14:47:55.719540 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:47:55.719550 | orchestrator | 2025-05-19 
14:47:55.719561 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-19 14:47:55.719572 | orchestrator | Monday 19 May 2025 14:46:36 +0000 (0:00:00.679) 0:00:25.120 ************ 2025-05-19 14:47:55.719582 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:47:55.719593 | orchestrator | 2025-05-19 14:47:55.719610 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-19 14:47:55.719650 | orchestrator | Monday 19 May 2025 14:46:38 +0000 (0:00:02.131) 0:00:27.251 ************ 2025-05-19 14:47:55.719662 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:47:55.719674 | orchestrator | 2025-05-19 14:47:55.719692 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-19 14:47:55.719711 | orchestrator | Monday 19 May 2025 14:46:40 +0000 (0:00:02.001) 0:00:29.252 ************ 2025-05-19 14:47:55.719730 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:47:55.719759 | orchestrator | 2025-05-19 14:47:55.719778 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-19 14:47:55.719797 | orchestrator | Monday 19 May 2025 14:46:55 +0000 (0:00:14.905) 0:00:44.158 ************ 2025-05-19 14:47:55.719815 | orchestrator | 2025-05-19 14:47:55.719833 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-19 14:47:55.719845 | orchestrator | Monday 19 May 2025 14:46:55 +0000 (0:00:00.063) 0:00:44.222 ************ 2025-05-19 14:47:55.719856 | orchestrator | 2025-05-19 14:47:55.719866 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-19 14:47:55.719877 | orchestrator | Monday 19 May 2025 14:46:55 +0000 (0:00:00.061) 0:00:44.283 ************ 2025-05-19 14:47:55.719887 | orchestrator | 2025-05-19 14:47:55.719898 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-19 14:47:55.719909 | orchestrator | Monday 19 May 2025 14:46:55 +0000 (0:00:00.063) 0:00:44.347 ************ 2025-05-19 14:47:55.719919 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:47:55.719930 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:47:55.719940 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:47:55.719951 | orchestrator | 2025-05-19 14:47:55.719961 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:47:55.719972 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-05-19 14:47:55.719983 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-19 14:47:55.719994 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-19 14:47:55.720005 | orchestrator | 2025-05-19 14:47:55.720015 | orchestrator | 2025-05-19 14:47:55.720029 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:47:55.720047 | orchestrator | Monday 19 May 2025 14:47:53 +0000 (0:00:58.492) 0:01:42.839 ************ 2025-05-19 14:47:55.720065 | orchestrator | =============================================================================== 2025-05-19 14:47:55.720089 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.49s 2025-05-19 14:47:55.720113 | 
orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.91s 2025-05-19 14:47:55.720130 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.40s 2025-05-19 14:47:55.720148 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.13s 2025-05-19 14:47:55.720166 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.00s 2025-05-19 14:47:55.720185 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.74s 2025-05-19 14:47:55.720203 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.50s 2025-05-19 14:47:55.720222 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.50s 2025-05-19 14:47:55.720240 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.39s 2025-05-19 14:47:55.720272 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.12s 2025-05-19 14:47:55.720290 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.12s 2025-05-19 14:47:55.720305 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.99s 2025-05-19 14:47:55.720324 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s 2025-05-19 14:47:55.720343 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s 2025-05-19 14:47:55.720360 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s 2025-05-19 14:47:55.720379 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.57s 2025-05-19 14:47:55.720411 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s 2025-05-19 14:47:55.720429 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.47s 2025-05-19 14:47:55.720449 | orchestrator | horizon : Update policy file name --------------------------------------- 0.46s 2025-05-19 14:47:55.720467 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.46s
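Every horizon task above loops over the same per-service definition, and the item=... dumps show its shape. Below is a minimal Python sketch of that structure, reduced to the fields visible in this log and using the testbed-node-0 values; the to_docker_healthcheck helper is hypothetical, included only to illustrate how the healthcheck block corresponds to docker run flags.

```python
# Shape of the per-service definition dumped by the horizon loop tasks.
# Values copied from the testbed-node-0 items in the log above.
horizon_service = {
    "container_name": "horizon",
    "group": "horizon",
    "enabled": True,
    "image": "registry.osism.tech/kolla/horizon:2024.2",
    "volumes": [
        "/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
        "timeout": "30",
    },
}


def to_docker_healthcheck(hc: dict) -> list[str]:
    """Hypothetical helper: express the healthcheck block as docker flags."""
    return [
        f"--health-cmd={hc['test'][1]}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]


print(" ".join(to_docker_healthcheck(horizon_service["healthcheck"])))
```

The same dict also carries the haproxy frontends seen in the dumps (horizon, horizon_redirect, horizon_external, horizon_external_redirect), which is how one service definition drives both the container and the load-balancer configuration.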
2025-05-19 14:47:58.762794 | orchestrator | 2025-05-19 14:47:58 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:47:58.762903 | orchestrator | 2025-05-19 14:47:58 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:47:58.762919 | orchestrator | 2025-05-19 14:47:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:01.810719 | orchestrator | 2025-05-19 14:48:01 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:01.812210 | orchestrator | 2025-05-19 14:48:01 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:01.812279 | orchestrator | 2025-05-19 14:48:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:04.860242 | orchestrator | 2025-05-19 14:48:04 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:04.861172 | orchestrator | 2025-05-19 14:48:04 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:04.861222 | orchestrator | 2025-05-19 14:48:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:07.919300 | orchestrator | 2025-05-19 14:48:07 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:07.920512 | orchestrator | 2025-05-19 14:48:07 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:07.920555 | orchestrator | 2025-05-19 14:48:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:10.991052 | orchestrator | 2025-05-19 14:48:10 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:10.992484 | orchestrator | 2025-05-19 14:48:10 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:10.992566 | orchestrator | 2025-05-19 14:48:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:14.049089 | orchestrator | 2025-05-19 14:48:14 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:14.050937 | orchestrator | 2025-05-19 14:48:14 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:14.050975 | orchestrator | 2025-05-19 14:48:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:17.095087 | orchestrator | 2025-05-19 14:48:17 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:17.097292 | orchestrator | 2025-05-19 14:48:17 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:17.097446 | orchestrator | 2025-05-19 14:48:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:20.160963 | orchestrator | 2025-05-19 14:48:20 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:20.162792 | orchestrator | 2025-05-19 14:48:20 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:20.162845 | orchestrator | 2025-05-19 14:48:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:23.209384 | orchestrator | 2025-05-19 14:48:23 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:23.211607 | orchestrator | 2025-05-19 14:48:23 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:23.211722 | orchestrator | 2025-05-19 14:48:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:26.259164 | orchestrator | 2025-05-19 14:48:26 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:26.259882 | orchestrator | 2025-05-19 14:48:26 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:26.259934 | orchestrator | 2025-05-19 14:48:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:29.305089 | orchestrator | 2025-05-19 14:48:29 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:29.306799 | orchestrator | 2025-05-19 14:48:29 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:29.306838 | orchestrator | 2025-05-19 14:48:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:32.357279 | orchestrator | 2025-05-19 14:48:32 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:32.359865 | orchestrator | 2025-05-19 14:48:32 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:32.359897 | orchestrator | 2025-05-19 14:48:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:35.409564 | orchestrator | 2025-05-19 14:48:35 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:35.411502 |
orchestrator | 2025-05-19 14:48:35 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state STARTED 2025-05-19 14:48:35.411526 | orchestrator | 2025-05-19 14:48:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:38.462188 | orchestrator | 2025-05-19 14:48:38 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:38.465804 | orchestrator | 2025-05-19 14:48:38 | INFO  | Task 88b2bf32-cb96-4617-93f3-0116cbffb51b is in state SUCCESS 2025-05-19 14:48:38.469775 | orchestrator | 2025-05-19 14:48:38 | INFO  | Task 727d7fdb-fc1f-41e7-9f68-e8b87ad13a6b is in state STARTED 2025-05-19 14:48:38.469826 | orchestrator | 2025-05-19 14:48:38 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED 2025-05-19 14:48:38.472765 | orchestrator | 2025-05-19 14:48:38 | INFO  | Task 0efc2ec4-d1e5-46d5-a567-9def9f8ec5b3 is in state STARTED 2025-05-19 14:48:38.472814 | orchestrator | 2025-05-19 14:48:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:41.532960 | orchestrator | 2025-05-19 14:48:41 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:41.533055 | orchestrator | 2025-05-19 14:48:41 | INFO  | Task 727d7fdb-fc1f-41e7-9f68-e8b87ad13a6b is in state STARTED 2025-05-19 14:48:41.533232 | orchestrator | 2025-05-19 14:48:41 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED 2025-05-19 14:48:41.533973 | orchestrator | 2025-05-19 14:48:41 | INFO  | Task 0efc2ec4-d1e5-46d5-a567-9def9f8ec5b3 is in state STARTED 2025-05-19 14:48:41.534009 | orchestrator | 2025-05-19 14:48:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:44.592109 | orchestrator | 2025-05-19 14:48:44 | INFO  | Task f6163725-9d95-463d-8b3a-3ea34d68db95 is in state STARTED 2025-05-19 14:48:44.593034 | orchestrator | 2025-05-19 14:48:44 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:44.594355 | orchestrator | 2025-05-19 14:48:44 | INFO  | Task d2f7c3ae-821b-4bca-96cb-2cde906a6542 is in state STARTED 2025-05-19 14:48:44.594434 | orchestrator | 2025-05-19 14:48:44 | INFO  | Task 727d7fdb-fc1f-41e7-9f68-e8b87ad13a6b is in state SUCCESS 2025-05-19 14:48:44.596163 | orchestrator | 2025-05-19 14:48:44 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED 2025-05-19 14:48:44.597882 | orchestrator | 2025-05-19 14:48:44 | INFO  | Task 0efc2ec4-d1e5-46d5-a567-9def9f8ec5b3 is in state STARTED 2025-05-19 14:48:44.597913 | orchestrator | 2025-05-19 14:48:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:47.629578 | orchestrator | 2025-05-19 14:48:47 | INFO  | Task f6163725-9d95-463d-8b3a-3ea34d68db95 is in state STARTED 2025-05-19 14:48:47.630319 | orchestrator | 2025-05-19 14:48:47 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state STARTED 2025-05-19 14:48:47.634834 | orchestrator | 2025-05-19 14:48:47 | INFO  | Task d2f7c3ae-821b-4bca-96cb-2cde906a6542 is in state STARTED 2025-05-19 14:48:47.635826 | orchestrator | 2025-05-19 14:48:47 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED 2025-05-19 14:48:47.636810 | orchestrator | 2025-05-19 14:48:47 | INFO  | Task 0efc2ec4-d1e5-46d5-a567-9def9f8ec5b3 is in state STARTED 2025-05-19 14:48:47.636832 | orchestrator | 2025-05-19 14:48:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:48:50.669490 | orchestrator | 2025-05-19 14:48:50 | INFO  | Task f6163725-9d95-463d-8b3a-3ea34d68db95 is in state STARTED 2025-05-19 14:48:50.670505 | 
orchestrator | 2025-05-19 14:48:50 | INFO  | Task f369ed59-4b34-49a5-926e-ee478ab48936 is in state SUCCESS 2025-05-19 14:48:50.672118 | orchestrator | 2025-05-19 14:48:50.672153 | orchestrator | 2025-05-19 14:48:50.672165 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-19 14:48:50.672177 | orchestrator | 2025-05-19 14:48:50.672188 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-19 14:48:50.672199 | orchestrator | Monday 19 May 2025 14:47:48 +0000 (0:00:00.225) 0:00:00.225 ************ 2025-05-19 14:48:50.672210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-19 14:48:50.672276 | orchestrator | 2025-05-19 14:48:50.672288 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-19 14:48:50.672299 | orchestrator | Monday 19 May 2025 14:47:48 +0000 (0:00:00.210) 0:00:00.436 ************ 2025-05-19 14:48:50.672310 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-19 14:48:50.672322 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-19 14:48:50.672333 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-19 14:48:50.672345 | orchestrator | 2025-05-19 14:48:50.672356 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-19 14:48:50.672367 | orchestrator | Monday 19 May 2025 14:47:49 +0000 (0:00:01.177) 0:00:01.614 ************ 2025-05-19 14:48:50.672378 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-19 14:48:50.672389 | orchestrator | 2025-05-19 14:48:50.672400 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-19 14:48:50.672411 | orchestrator | Monday 19 May 2025 14:47:50 +0000 (0:00:01.095) 0:00:02.709 ************ 2025-05-19 14:48:50.672422 | orchestrator | changed: [testbed-manager] 2025-05-19 14:48:50.672433 | orchestrator | 2025-05-19 14:48:50.672451 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-19 14:48:50.672462 | orchestrator | Monday 19 May 2025 14:47:51 +0000 (0:00:00.900) 0:00:03.610 ************ 2025-05-19 14:48:50.672473 | orchestrator | changed: [testbed-manager] 2025-05-19 14:48:50.672485 | orchestrator | 2025-05-19 14:48:50.672495 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-19 14:48:50.672506 | orchestrator | Monday 19 May 2025 14:47:52 +0000 (0:00:00.880) 0:00:04.490 ************ 2025-05-19 14:48:50.672517 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
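The retry above is Ansible's standard retries/until pattern: the task is re-run until its condition holds, logging the remaining attempts each time, and it succeeds on the next attempt below. A minimal Python analogue of that loop, assuming a caller-supplied probe callable and a 5-second default delay (both assumptions; the real task is Ansible, not Python):

```python
import time


def wait_until(probe, retries: int = 10, delay: float = 5.0) -> bool:
    """Re-run `probe` until it succeeds, allowing up to `retries` retries."""
    for attempt in range(retries + 1):
        if probe():
            return True
        if attempt < retries:
            # Ansible logs this moment as: FAILED - RETRYING: ... (N retries left).
            print(f"FAILED - RETRYING ({retries - attempt} retries left)")
            time.sleep(delay)
    return False


if __name__ == "__main__":
    attempts = iter([False, True])  # toy probe: succeeds on the second try
    print(wait_until(lambda: next(attempts), delay=0.1))
```

With retries set to 10, the sketch prints "(10 retries left)" after the first failure, matching the line above.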
2025-05-19 14:48:50.672528 | orchestrator | ok: [testbed-manager] 2025-05-19 14:48:50.672558 | orchestrator | 2025-05-19 14:48:50.672569 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-19 14:48:50.672580 | orchestrator | Monday 19 May 2025 14:48:28 +0000 (0:00:35.847) 0:00:40.337 ************ 2025-05-19 14:48:50.672591 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-19 14:48:50.672602 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-19 14:48:50.672618 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-19 14:48:50.672628 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-19 14:48:50.672645 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-19 14:48:50.672660 | orchestrator | 2025-05-19 14:48:50.672671 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-19 14:48:50.672694 | orchestrator | Monday 19 May 2025 14:48:32 +0000 (0:00:03.833) 0:00:44.171 ************ 2025-05-19 14:48:50.672736 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-19 14:48:50.672756 | orchestrator | 2025-05-19 14:48:50.672769 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-19 14:48:50.672782 | orchestrator | Monday 19 May 2025 14:48:32 +0000 (0:00:00.129) 0:00:44.609 ************ 2025-05-19 14:48:50.672794 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:48:50.672813 | orchestrator | 2025-05-19 14:48:50.672825 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-19 14:48:50.672837 | orchestrator | Monday 19 May 2025 14:48:32 +0000 (0:00:00.300) 0:00:44.738 ************ 2025-05-19 14:48:50.672849 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:48:50.672860 | orchestrator | 2025-05-19 14:48:50.672872 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-19 14:48:50.672884 | orchestrator | Monday 19 May 2025 14:48:32 +0000 (0:00:01.579) 0:00:45.039 ************ 2025-05-19 14:48:50.672902 | orchestrator | changed: [testbed-manager] 2025-05-19 14:48:50.672914 | orchestrator | 2025-05-19 14:48:50.672926 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-19 14:48:50.672938 | orchestrator | Monday 19 May 2025 14:48:34 +0000 (0:00:00.673) 0:00:46.618 ************ 2025-05-19 14:48:50.672950 | orchestrator | changed: [testbed-manager] 2025-05-19 14:48:50.672962 | orchestrator | 2025-05-19 14:48:50.672974 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ****** 2025-05-19 14:48:50.672986 | orchestrator | Monday 19 May 2025 14:48:35 +0000 (0:00:00.561) 0:00:47.292 ************ 2025-05-19 14:48:50.672998 | orchestrator | changed: [testbed-manager] 2025-05-19 14:48:50.673010 | orchestrator |
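The wrapper scripts installed above (ceph, ceph-authtool, rados, radosgw-admin, rbd) let Ceph CLI calls run on the manager without local packages by forwarding them into the cephclient container. The real wrappers are shell scripts shipped by the osism.services.cephclient role; purely to illustrate the pattern, here is a Python stand-in (the container name and docker exec usage are assumptions):

```python
#!/usr/bin/env python3
"""Illustrative stand-in for one of the wrapper scripts installed above.

The real osism.services.cephclient wrappers are shell scripts; this sketch
only shows the pattern: run the requested Ceph CLI inside the cephclient
container. Container name and `docker exec` usage are assumptions.
"""
import os
import sys


def main() -> None:
    tool = os.path.basename(sys.argv[0])  # e.g. "ceph", "rados", "rbd"
    cmd = ["docker", "exec", "cephclient", tool, *sys.argv[1:]]
    os.execvp(cmd[0], cmd)  # replace this process with the docker call


if __name__ == "__main__":
    main()
```

Installed once per tool name (for example as /usr/local/bin/ceph), a plain `ceph -s` on the manager would then execute inside the container.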
2025-05-19 14:48:50.673021 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-19 14:48:50.673034 | orchestrator | Monday 19 May 2025 14:48:35 +0000 (0:00:00.561) 0:00:47.853 ************ 2025-05-19 14:48:50.673046 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-19 14:48:50.673058 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-19 14:48:50.673070 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-19 14:48:50.673080 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-05-19 14:48:50.673091 | orchestrator | 2025-05-19 14:48:50.673102 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:48:50.673113 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 14:48:50.673124 | orchestrator | 2025-05-19 14:48:50.673135 | orchestrator | 2025-05-19 14:48:50.673164 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:48:50.673176 | orchestrator | Monday 19 May 2025 14:48:37 +0000 (0:00:01.362) 0:00:49.216 ************ 2025-05-19 14:48:50.673187 | orchestrator | =============================================================================== 2025-05-19 14:48:50.673198 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.85s 2025-05-19 14:48:50.673208 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.83s 2025-05-19 14:48:50.673227 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.58s 2025-05-19 14:48:50.673238 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.36s 2025-05-19 14:48:50.673249 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.18s 2025-05-19 14:48:50.673260 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.10s 2025-05-19 14:48:50.673270 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.90s 2025-05-19 14:48:50.673281 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s 2025-05-19 14:48:50.673291 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.67s 2025-05-19 14:48:50.673302 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.56s 2025-05-19 14:48:50.673313 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.44s 2025-05-19 14:48:50.673323 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2025-05-19 14:48:50.673334 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-05-19 14:48:50.673345 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-05-19 14:48:50.673355 | orchestrator | 2025-05-19 14:48:50.673366 | orchestrator | 2025-05-19 14:48:50.673377 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 14:48:50.673387 | orchestrator | 2025-05-19 14:48:50.673398 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:48:50.673409 | orchestrator | Monday 19 May 2025 14:48:41 +0000 (0:00:00.156) 0:00:00.156 ************ 2025-05-19 14:48:50.673419 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:48:50.673430 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:48:50.673441 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:48:50.673452 | orchestrator | 2025-05-19 14:48:50.673462 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:48:50.673473 | orchestrator | Monday 19 May 2025 14:48:41 +0000 (0:00:00.256) 0:00:00.413 ************ 2025-05-19 14:48:50.673484 | orchestrator | ok:
[testbed-node-0] => (item=enable_keystone_True) 2025-05-19 14:48:50.673495 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-19 14:48:50.673505 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-19 14:48:50.673516 | orchestrator | 2025-05-19 14:48:50.673527 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-19 14:48:50.673537 | orchestrator | 2025-05-19 14:48:50.673548 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-19 14:48:50.673559 | orchestrator | Monday 19 May 2025 14:48:41 +0000 (0:00:00.541) 0:00:00.954 ************ 2025-05-19 14:48:50.673570 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:48:50.673580 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:48:50.673591 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:48:50.673602 | orchestrator | 2025-05-19 14:48:50.673612 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:48:50.673624 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:48:50.673634 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:48:50.673645 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:48:50.673656 | orchestrator | 2025-05-19 14:48:50.673666 | orchestrator | 2025-05-19 14:48:50.673677 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:48:50.673688 | orchestrator | Monday 19 May 2025 14:48:42 +0000 (0:00:00.642) 0:00:01.597 ************ 2025-05-19 14:48:50.673699 | orchestrator | =============================================================================== 2025-05-19 14:48:50.673738 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.64s 2025-05-19 14:48:50.673769 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2025-05-19 14:48:50.673787 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-05-19 14:48:50.673798 | orchestrator | 2025-05-19 14:48:50.673809 | orchestrator | 2025-05-19 14:48:50.673820 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 14:48:50.673830 | orchestrator | 2025-05-19 14:48:50.673841 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:48:50.673851 | orchestrator | Monday 19 May 2025 14:46:11 +0000 (0:00:00.254) 0:00:00.254 ************ 2025-05-19 14:48:50.673862 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:48:50.673872 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:48:50.673883 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:48:50.673893 | orchestrator | 2025-05-19 14:48:50.673904 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:48:50.673915 | orchestrator | Monday 19 May 2025 14:46:11 +0000 (0:00:00.262) 0:00:00.517 ************ 2025-05-19 14:48:50.673925 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-19 14:48:50.673936 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-19 14:48:50.673947 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-19 
14:48:50.673958 | orchestrator | 2025-05-19 14:48:50.673974 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-19 14:48:50.673985 | orchestrator | 2025-05-19 14:48:50.674087 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 14:48:50.674104 | orchestrator | Monday 19 May 2025 14:46:11 +0000 (0:00:00.393) 0:00:00.910 ************ 2025-05-19 14:48:50.674115 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:48:50.674126 | orchestrator | 2025-05-19 14:48:50.674136 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-19 14:48:50.674147 | orchestrator | Monday 19 May 2025 14:46:12 +0000 (0:00:00.502) 0:00:01.413 ************ 2025-05-19 14:48:50.674163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.674179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.674200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.674219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674334 | orchestrator | 2025-05-19 14:48:50.674345 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-19 14:48:50.674356 | orchestrator | Monday 19 May 2025 14:46:13 +0000 (0:00:01.638) 0:00:03.052 ************ 2025-05-19 14:48:50.674366 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-19 14:48:50.674377 | orchestrator | 2025-05-19 14:48:50.674388 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-19 14:48:50.674399 | orchestrator | Monday 19 May 2025 14:46:14 +0000 (0:00:00.835) 0:00:03.888 ************ 2025-05-19 14:48:50.674410 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:48:50.674420 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:48:50.674431 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:48:50.674442 | orchestrator | 2025-05-19 14:48:50.674452 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-19 14:48:50.674463 | orchestrator | Monday 19 May 2025 14:46:15 +0000 (0:00:00.454) 0:00:04.342 ************ 2025-05-19 14:48:50.674474 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:48:50.674485 | orchestrator | 2025-05-19 14:48:50.674495 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 14:48:50.674506 | orchestrator | Monday 19 May 2025 14:46:15 +0000 (0:00:00.662) 0:00:05.004 ************ 2025-05-19 14:48:50.674517 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:48:50.674532 | orchestrator | 2025-05-19 14:48:50.674549 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-19 14:48:50.674560 | orchestrator | Monday 19 May 2025 14:46:16 +0000 (0:00:00.509) 0:00:05.514 ************ 2025-05-19 14:48:50.674572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.674584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.674602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.674614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.674723 | orchestrator | 2025-05-19 14:48:50.674736 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-19 14:48:50.674747 | orchestrator | Monday 19 May 2025 14:46:19 +0000 (0:00:03.271) 0:00:08.786 ************ 2025-05-19 14:48:50.674758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 14:48:50.674783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.674795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:48:50.674818 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:48:50.674831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 14:48:50.674843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.674854 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:48:50.674865 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:48:50.674888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 14:48:50.674900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.674917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:48:50.674929 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:48:50.674939 | orchestrator | 2025-05-19 14:48:50.674950 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-19 14:48:50.674961 | orchestrator | Monday 19 May 2025 14:46:20 +0000 (0:00:00.646) 0:00:09.432 ************ 2025-05-19 14:48:50.674973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 14:48:50.674984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.674996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:48:50.675007 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:48:50.675029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 14:48:50.675048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.675060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:48:50.675071 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:48:50.675083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 14:48:50.675094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.675117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 14:48:50.675129 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:48:50.675145 | orchestrator | 
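Every loop in this play iterates the same service map for the three keystone containers (keystone, keystone_ssh, keystone_fernet); the item dumps above are that map verbatim, and the same items drive the config.json and keystone.conf loops that follow. Both backend-TLS copy tasks report "skipping" on every node, which is consistent with each haproxy endpoint in the items carrying 'tls_backend': 'no'. A minimal illustrative sketch of that reading, not kolla-ansible code (the dict is condensed from one logged item; wants_backend_tls is a hypothetical helper, and the role's real skip condition may differ):

service = {
    'key': 'keystone',
    'value': {
        'container_name': 'keystone',
        'image': 'registry.osism.tech/kolla/keystone:2024.2',
        'healthcheck': {'test': ['CMD-SHELL',
                                 'healthcheck_curl http://192.168.16.10:5000']},
        'haproxy': {
            'keystone_internal': {'enabled': True, 'external': False,
                                  'tls_backend': 'no', 'port': '5000'},
            'keystone_external': {'enabled': True, 'external': True,
                                  'external_fqdn': 'api.testbed.osism.xyz',
                                  'tls_backend': 'no', 'port': '5000'},
        },
    },
}

def wants_backend_tls(svc: dict) -> bool:
    # Copy backend TLS material only when at least one haproxy endpoint
    # requests a TLS backend; every endpoint above says 'no', hence the skips.
    return any(ep.get('tls_backend') == 'yes'
               for ep in svc['value'].get('haproxy', {}).values())

print(wants_backend_tls(service))  # False, matching the 'skipping' lines above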
2025-05-19 14:48:50.675156 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-19 14:48:50.675167 | orchestrator | Monday 19 May 2025 14:46:21 +0000 (0:00:00.727) 0:00:10.160 ************ 2025-05-19 14:48:50.675187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.675205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.675218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.675240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675314 | orchestrator | 2025-05-19 14:48:50.675325 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-19 14:48:50.675344 | orchestrator | Monday 19 May 2025 14:46:24 +0000 (0:00:03.354) 0:00:13.514 ************ 2025-05-19 14:48:50.675368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.675387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.675400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.675411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.675423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.675435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.675462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675496 | orchestrator | 2025-05-19 14:48:50.675507 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-19 14:48:50.675518 | orchestrator | Monday 19 May 2025 14:46:29 +0000 (0:00:05.001) 0:00:18.516 ************ 2025-05-19 14:48:50.675529 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:48:50.675540 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:48:50.675551 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:48:50.675561 | orchestrator | 2025-05-19 14:48:50.675572 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-19 14:48:50.675583 | orchestrator | Monday 19 May 2025 14:46:30 +0000 (0:00:01.297) 0:00:19.813 ************ 2025-05-19 14:48:50.675593 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:48:50.675604 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:48:50.675615 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:48:50.675625 | orchestrator | 2025-05-19 14:48:50.675636 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-19 14:48:50.675646 | orchestrator | Monday 19 May 2025 14:46:31 +0000 (0:00:00.519) 0:00:20.332 ************ 2025-05-19 14:48:50.675657 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:48:50.675668 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:48:50.675678 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:48:50.675689 | orchestrator | 2025-05-19 14:48:50.675699 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-19 14:48:50.675738 | orchestrator | Monday 19 May 2025 14:46:31 +0000 (0:00:00.451) 0:00:20.784 ************ 2025-05-19 14:48:50.675748 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:48:50.675759 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:48:50.675770 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:48:50.675780 | orchestrator | 2025-05-19 14:48:50.675791 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-19 14:48:50.675808 | orchestrator | Monday 19 May 2025 14:46:31 +0000 (0:00:00.270) 0:00:21.054 ************ 2025-05-19 14:48:50.675824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.675843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.675856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.675868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.675880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 14:48:50.675900 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 14:48:50.675923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 14:48:50.675958 | orchestrator | 2025-05-19 14:48:50.675969 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 14:48:50.675980 | orchestrator | Monday 19 May 2025 14:46:34 +0000 (0:00:02.243) 0:00:23.298 ************ 2025-05-19 14:48:50.675990 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:48:50.676001 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:48:50.676012 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:48:50.676022 | orchestrator | 2025-05-19 14:48:50.676033 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-19 14:48:50.676044 | orchestrator | Monday 19 May 2025 14:46:34 +0000 (0:00:00.321) 0:00:23.619 ************ 2025-05-19 14:48:50.676055 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-19 14:48:50.676066 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-19 14:48:50.676076 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-19 14:48:50.676092 | orchestrator | 2025-05-19 14:48:50.676103 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-19 14:48:50.676114 | orchestrator | Monday 19 May 2025 14:46:36 +0000 (0:00:01.964) 0:00:25.583 ************ 2025-05-19 14:48:50.676124 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:48:50.676135 | orchestrator | 2025-05-19 14:48:50.676146 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-19 14:48:50.676156 | orchestrator | Monday 19 May 2025 14:46:37 +0000 (0:00:00.838) 0:00:26.422 ************ 2025-05-19 14:48:50.676167 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:48:50.676177 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:48:50.676188 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:48:50.676198 | orchestrator | 2025-05-19 14:48:50.676209 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-19 14:48:50.676219 | orchestrator | Monday 19 May 2025 14:46:37 +0000 (0:00:00.481) 0:00:26.903 ************ 2025-05-19 14:48:50.676230 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:48:50.676240 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-19 14:48:50.676251 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-19 14:48:50.676261 | orchestrator | 2025-05-19 14:48:50.676272 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-19 14:48:50.676283 | orchestrator | Monday 19 May 2025 14:46:38 +0000 (0:00:00.943) 0:00:27.847 ************ 2025-05-19 14:48:50.676293 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:48:50.676304 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:48:50.676314 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:48:50.676325 | orchestrator | 2025-05-19 14:48:50.676335 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-19 14:48:50.676346 | orchestrator | Monday 19 May 2025 14:46:39 +0000 (0:00:00.277) 0:00:28.125 ************ 2025-05-19 14:48:50.676356 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-19 14:48:50.676367 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-19 14:48:50.676377 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-19 14:48:50.676388 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-19 14:48:50.676399 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-19 14:48:50.676419 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-19 14:48:50.676431 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-19 14:48:50.676442 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-19 14:48:50.676452 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 
'fernet-node-sync.sh'})
2025-05-19 14:48:50.676463 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-19 14:48:50.676473 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-19 14:48:50.676484 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-19 14:48:50.676494 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-19 14:48:50.676505 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-19 14:48:50.676516 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-19 14:48:50.676526 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-19 14:48:50.676537 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-19 14:48:50.676553 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-19 14:48:50.676564 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-19 14:48:50.676575 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-19 14:48:50.676585 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-19 14:48:50.676596 | orchestrator |
2025-05-19 14:48:50.676606 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-05-19 14:48:50.676617 | orchestrator | Monday 19 May 2025 14:46:47 +0000 (0:00:08.555) 0:00:36.680 ************
2025-05-19 14:48:50.676627 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-19 14:48:50.676638 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-19 14:48:50.676648 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-19 14:48:50.676658 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-19 14:48:50.676669 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-19 14:48:50.676680 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-19 14:48:50.676690 | orchestrator |
2025-05-19 14:48:50.676725 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-05-19 14:48:50.676736 | orchestrator | Monday 19 May 2025 14:46:49 +0000 (0:00:02.383) 0:00:39.064 ************
2025-05-19 14:48:50.676748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-19 14:48:50.676772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-19 14:48:50.676786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-19 14:48:50.676804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-19 14:48:50.676815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-19 14:48:50.676827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-19 14:48:50.676838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-19 14:48:50.676859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-19 14:48:50.676871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-19 14:48:50.676887 | orchestrator |
2025-05-19 14:48:50.676899 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-19 14:48:50.676910 | orchestrator | Monday 19 May 2025 14:46:52 +0000 (0:00:02.132) 0:00:41.197 ************
2025-05-19 14:48:50.676920 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:48:50.676931 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:48:50.676942 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:48:50.676952 | orchestrator |
2025-05-19 14:48:50.676963 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-05-19 14:48:50.676973 | orchestrator | Monday 19 May 2025 14:46:52 +0000 (0:00:00.278) 0:00:41.475 ************
2025-05-19 14:48:50.676983 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:48:50.676994 | orchestrator |
2025-05-19 14:48:50.677004 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-05-19 14:48:50.677015 | orchestrator | Monday 19 May 2025 14:46:54 +0000 (0:00:02.152) 0:00:43.627 ************
2025-05-19 14:48:50.677025 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:48:50.677035 | orchestrator |
2025-05-19 14:48:50.677046 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-05-19 14:48:50.677056 | orchestrator | Monday 19 May 2025 14:46:57 +0000 (0:00:02.518) 0:00:46.146 ************
2025-05-19 14:48:50.677067 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:48:50.677077 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:48:50.677087 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:48:50.677098 | orchestrator |
2025-05-19 14:48:50.677108 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-05-19 14:48:50.677119 | orchestrator | Monday 19 May 2025 14:46:57 +0000 (0:00:00.919) 0:00:47.066 ************
2025-05-19 14:48:50.677129 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:48:50.677140 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:48:50.677150 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:48:50.677160 | orchestrator |
2025-05-19 14:48:50.677171 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-05-19 14:48:50.677181 | orchestrator | Monday 19 May 2025 14:46:58 +0000 (0:00:00.265) 0:00:47.331 ************
2025-05-19 14:48:50.677192 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:48:50.677203 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:48:50.677213 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:48:50.677224 | orchestrator |
2025-05-19 14:48:50.677234 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-05-19 14:48:50.677245 | orchestrator | Monday 19 May 2025 14:46:58 +0000 (0:00:00.361) 0:00:47.692 ************
2025-05-19 14:48:50.677255 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:48:50.677266 | orchestrator |
2025-05-19 14:48:50.677276 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-05-19 14:48:50.677287 | orchestrator | Monday 19 May 2025 14:47:11 +0000 (0:00:13.370) 0:01:01.062 ************
2025-05-19 14:48:50.677297 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:48:50.677308 | orchestrator |
2025-05-19 14:48:50.677319 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-19 14:48:50.677329 | orchestrator | Monday 19 May 2025 14:47:20 +0000 (0:00:08.836) 0:01:09.899 ************
2025-05-19 14:48:50.677339 | orchestrator |
2025-05-19 14:48:50.677350 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-19 14:48:50.677361 | orchestrator | Monday 19 May 2025 14:47:21 +0000 (0:00:00.215) 0:01:10.114 ************
2025-05-19 14:48:50.677376 | orchestrator |
2025-05-19 14:48:50.677387 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-19 14:48:50.677397 | orchestrator | Monday 19 May 2025 14:47:21 +0000 (0:00:00.062) 0:01:10.175 ************
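For orientation, the two bootstrap tasks above run one-shot kolla containers against the freshly created database. A minimal sketch of the commands they wrap, assuming kolla-ansible's standard keystone bootstrap flow (the password variable is a placeholder; the exact invocation inside the container may differ):

    # Initialize the schema, seed the Fernet key repository, then bootstrap the identity service
    keystone-manage db_sync
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    keystone-manage bootstrap --bootstrap-password "${KEYSTONE_ADMIN_PASSWORD}" \
        --bootstrap-region-id RegionOne \
        --bootstrap-internal-url https://api-int.testbed.osism.xyz:5000 \
        --bootstrap-public-url https://api.testbed.osism.xyz:5000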
2025-05-19 14:48:50.677408 | orchestrator |
2025-05-19 14:48:50.677418 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-05-19 14:48:50.677429 | orchestrator | Monday 19 May 2025 14:47:21 +0000 (0:00:00.062) 0:01:10.237 ************
2025-05-19 14:48:50.677439 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:48:50.677450 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:48:50.677460 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:48:50.677471 | orchestrator |
2025-05-19 14:48:50.677481 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-05-19 14:48:50.677492 | orchestrator | Monday 19 May 2025 14:47:44 +0000 (0:00:23.453) 0:01:33.691 ************
2025-05-19 14:48:50.677502 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:48:50.677513 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:48:50.677524 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:48:50.677534 | orchestrator |
2025-05-19 14:48:50.677545 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-05-19 14:48:50.677555 | orchestrator | Monday 19 May 2025 14:47:54 +0000 (0:00:09.936) 0:01:43.628 ************
2025-05-19 14:48:50.677566 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:48:50.677580 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:48:50.677597 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:48:50.677608 | orchestrator |
2025-05-19 14:48:50.677618 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-19 14:48:50.677629 | orchestrator | Monday 19 May 2025 14:48:01 +0000 (0:00:06.528) 0:01:50.156 ************
2025-05-19 14:48:50.677639 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:48:50.677650 | orchestrator |
2025-05-19 14:48:50.677661 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-05-19 14:48:50.677671 | orchestrator | Monday 19 May 2025 14:48:01 +0000 (0:00:00.673) 0:01:50.830 ************
2025-05-19 14:48:50.677688 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:48:50.677720 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:48:50.677731 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:48:50.677742 | orchestrator |
2025-05-19 14:48:50.677752 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-05-19 14:48:50.677763 | orchestrator | Monday 19 May 2025 14:48:02 +0000 (0:00:00.675) 0:01:51.505 ************
2025-05-19 14:48:50.677773 | orchestrator | changed: [testbed-node-0]
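The key distribution step just above pushes the Fernet key repository from the node holding the newest key to its peers through the keystone-ssh containers, which listen on port 8023 per the container definitions earlier in the log. A rough sketch of the idea behind the generated fernet-push.sh, assuming the usual rsync-over-SSH approach; the peer list and paths here are illustrative:

    # Illustrative only: replicate /etc/keystone/fernet-keys to the other controllers
    for peer in 192.168.16.11 192.168.16.12; do
        rsync -az --delete \
            -e "ssh -F /etc/keystone/ssh_config -p 8023" \
            /etc/keystone/fernet-keys/ "keystone@${peer}:/etc/keystone/fernet-keys/"
    done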
2025-05-19 14:48:50.677784 | orchestrator |
2025-05-19 14:48:50.677795 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-05-19 14:48:50.677805 | orchestrator | Monday 19 May 2025 14:48:04 +0000 (0:00:01.660) 0:01:53.166 ************
2025-05-19 14:48:50.677816 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-05-19 14:48:50.677826 | orchestrator |
2025-05-19 14:48:50.677837 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-05-19 14:48:50.677848 | orchestrator | Monday 19 May 2025 14:48:13 +0000 (0:00:09.191) 0:02:02.357 ************
2025-05-19 14:48:50.677858 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-05-19 14:48:50.677869 | orchestrator |
2025-05-19 14:48:50.677879 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-05-19 14:48:50.677890 | orchestrator | Monday 19 May 2025 14:48:32 +0000 (0:00:19.535) 0:02:21.893 ************
2025-05-19 14:48:50.677900 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-05-19 14:48:50.677911 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-05-19 14:48:50.677921 | orchestrator |
2025-05-19 14:48:50.677932 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-05-19 14:48:50.677948 | orchestrator | Monday 19 May 2025 14:48:44 +0000 (0:00:11.970) 0:02:33.863 ************
2025-05-19 14:48:50.677959 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:48:50.677969 | orchestrator |
2025-05-19 14:48:50.677980 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-05-19 14:48:50.677995 | orchestrator | Monday 19 May 2025 14:48:45 +0000 (0:00:00.238) 0:02:34.102 ************
2025-05-19 14:48:50.678013 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:48:50.678064 | orchestrator |
2025-05-19 14:48:50.678075 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-05-19 14:48:50.678086 | orchestrator | Monday 19 May 2025 14:48:45 +0000 (0:00:00.088) 0:02:34.191 ************
2025-05-19 14:48:50.678096 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:48:50.678107 | orchestrator |
2025-05-19 14:48:50.678117 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-05-19 14:48:50.678128 | orchestrator | Monday 19 May 2025 14:48:45 +0000 (0:00:00.153) 0:02:34.345 ************
2025-05-19 14:48:50.678139 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:48:50.678149 | orchestrator |
2025-05-19 14:48:50.678160 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-05-19 14:48:50.678170 | orchestrator | Monday 19 May 2025 14:48:45 +0000 (0:00:00.387) 0:02:34.732 ************
2025-05-19 14:48:50.678181 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:48:50.678192 | orchestrator |
2025-05-19 14:48:50.678202 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-19 14:48:50.678213 | orchestrator | Monday 19 May 2025 14:48:49 +0000 (0:00:03.649) 0:02:38.381 ************
2025-05-19 14:48:50.678223 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:48:50.678234 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:48:50.678245 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:48:50.678256 | orchestrator |
2025-05-19 14:48:50.678266 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:48:50.678277 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-19 14:48:50.678288 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-19 14:48:50.678299 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
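The keystone registration in this play is handled by Ansible modules rather than the CLI, but expressed as OpenStack client calls it is roughly equivalent to the following (illustrative sketch; URLs taken from the log above):

    # Illustrative CLI equivalent of the service/endpoint registration above
    openstack service create --name keystone identity
    openstack endpoint create --region RegionOne keystone internal https://api-int.testbed.osism.xyz:5000
    openstack endpoint create --region RegionOne keystone public https://api.testbed.osism.xyz:5000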
2025-05-19 14:48:50.678310 | orchestrator |
2025-05-19 14:48:50.678320 | orchestrator |
2025-05-19 14:48:50.678331 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:48:50.678342 | orchestrator | Monday 19 May 2025 14:48:50 +0000 (0:00:00.740) 0:02:39.122 ************
2025-05-19 14:48:50.678353 | orchestrator | ===============================================================================
2025-05-19 14:48:50.678363 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 23.45s
2025-05-19 14:48:50.678374 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.54s
2025-05-19 14:48:50.678384 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.37s
2025-05-19 14:48:50.678395 | orchestrator | service-ks-register : keystone | Creating endpoints -------------------- 11.97s
2025-05-19 14:48:50.678411 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.94s
2025-05-19 14:48:50.678428 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.19s
2025-05-19 14:48:50.678439 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.84s
2025-05-19 14:48:50.678450 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.56s
2025-05-19 14:48:50.678460 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.53s
2025-05-19 14:48:50.678471 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.00s
2025-05-19 14:48:50.678482 | orchestrator | keystone : Creating default user role ----------------------------------- 3.65s
2025-05-19 14:48:50.678498 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.35s
2025-05-19 14:48:50.678509 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.27s
2025-05-19 14:48:50.678520 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.52s
2025-05-19 14:48:50.678530 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.38s
2025-05-19 14:48:50.678541 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.24s
2025-05-19 14:48:50.678551 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.15s
2025-05-19 14:48:50.678562 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.13s
2025-05-19 14:48:50.678573 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.96s
2025-05-19 14:48:50.678583 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.66s
[Polling output condensed: between 14:48:50 and 14:50:00 the orchestrator repeatedly logged "Task <id> is in state STARTED" for tasks f6163725-9d95-463d-8b3a-3ea34d68db95, d2f7c3ae-821b-4bca-96cb-2cde906a6542, 689192b0-eb62-4f99-8b50-95dc5bbc5d48, 2b254e86-a618-4658-8dc7-54d8146270de and 0efc2ec4-d1e5-46d5-a567-9def9f8ec5b3, checking roughly every 3 seconds and ending each cycle with "Wait 1 second(s) until the next check". Task f6163725-9d95-463d-8b3a-3ea34d68db95 reached state SUCCESS at 14:49:21, after which task f1a37332-2342-4416-b799-9db1b8d29db6 entered state STARTED.]
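The INFO lines summarized here come from the deploy wrapper polling OSISM's task queue until each Celery task finishes. The pattern is essentially the following loop; tasks_pending and task_state are hypothetical helper names for illustration (the real client lives in the osism Python package):

    # Hypothetical sketch of the polling loop seen in this log
    while tasks_pending; do
        for id in ${TASK_IDS}; do
            echo "$(date '+%Y-%m-%d %H:%M:%S') | INFO  | Task ${id} is in state $(task_state "${id}")"
        done
        echo "Wait 1 second(s) until the next check"
        sleep 1
    done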
2025-05-19 14:50:00.512225 | orchestrator | 2025-05-19 14:50:00 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:50:00.513141 | orchestrator | 2025-05-19 14:50:00 | INFO  | Task 0efc2ec4-d1e5-46d5-a567-9def9f8ec5b3 is in state SUCCESS
2025-05-19 14:50:00.513153 | orchestrator | 2025-05-19 14:50:00 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:50:00.513367 | orchestrator |
2025-05-19 14:50:00.513377 | orchestrator |
2025-05-19 14:50:00.513382 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:50:00.513387 | orchestrator |
2025-05-19 14:50:00.513392 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 14:50:00.513397 | orchestrator | Monday 19 May 2025 14:48:47 +0000 (0:00:00.227) 0:00:00.227 ************
2025-05-19 14:50:00.513401 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:50:00.513406 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:50:00.513411 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:50:00.513415 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:50:00.513420 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:50:00.513424 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:50:00.513429 | orchestrator | ok: [testbed-manager]
2025-05-19 14:50:00.513433 | orchestrator |
2025-05-19 14:50:00.513438 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:50:00.513442 | orchestrator | Monday 19 May 2025 14:48:48 +0000 (0:00:00.993) 0:00:01.220 ************
2025-05-19 14:50:00.513447 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-05-19 14:50:00.513452 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-05-19 14:50:00.513456 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-05-19 14:50:00.513461 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-05-19 14:50:00.513465 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-05-19 14:50:00.513470 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-05-19 14:50:00.513475 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-05-19 14:50:00.513479 | orchestrator |
2025-05-19 14:50:00.513487 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-19 14:50:00.513495 | orchestrator |
2025-05-19 14:50:00.513503 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-05-19 14:50:00.513512 | orchestrator | Monday 19 May 2025 14:48:49 +0000 (0:00:01.548) 0:00:02.769 ************
2025-05-19 14:50:00.513520 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-05-19 14:50:00.513530 | orchestrator |
2025-05-19 14:50:00.513537 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-05-19 14:50:00.513541 | orchestrator | Monday 19 May 2025 14:48:51 +0000 (0:00:01.249) 0:00:04.019 ************
2025-05-19 14:50:00.513546 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-05-19 14:50:00.513550 | orchestrator |
2025-05-19 14:50:00.513555 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-05-19 14:50:00.513571 | orchestrator | Monday 19 May 2025 14:48:54 +0000 (0:00:03.451) 0:00:07.470 ************
2025-05-19 14:50:00.513576 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-05-19 14:50:00.513581 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-05-19 14:50:00.513586 | orchestrator |
2025-05-19 14:50:00.513590 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-05-19 14:50:00.513595 | orchestrator | Monday 19 May 2025 14:49:00 +0000 (0:00:05.439) 0:00:12.910 ************
2025-05-19 14:50:00.513599 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-19 14:50:00.513604 | orchestrator |
2025-05-19 14:50:00.513608 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-05-19 14:50:00.513613 | orchestrator | Monday 19 May 2025 14:49:02 +0000 (0:00:02.790) 0:00:15.700 ************
2025-05-19 14:50:00.513617 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-19 14:50:00.513622 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-05-19 14:50:00.513626 | orchestrator |
2025-05-19 14:50:00.513638 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-05-19 14:50:00.513643 | orchestrator | Monday 19 May 2025 14:49:06 +0000 (0:00:03.869) 0:00:19.570 ************
2025-05-19 14:50:00.513647 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-19 14:50:00.513652 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-05-19 14:50:00.513656 | orchestrator |
2025-05-19 14:50:00.513661 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-05-19 14:50:00.513665 | orchestrator | Monday 19 May 2025 14:49:12 +0000 (0:00:06.101) 0:00:25.672 ************
2025-05-19 14:50:00.513670 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-05-19 14:50:00.513674 | orchestrator |
2025-05-19 14:50:00.513678 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:50:00.513683 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:50:00.513688 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:50:00.513692 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:50:00.513697 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:50:00.513701 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:50:00.513711 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:50:00.513716 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
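As with keystone, the Swift registration for the RADOS gateway is module-driven; a rough CLI equivalent would be (illustrative only; the AUTH_%(project_id)s suffix is substituted per project at request time):

    # Illustrative CLI equivalent of the Swift (object-store) registration above
    openstack service create --name swift object-store
    openstack endpoint create --region RegionOne swift internal 'https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s'
    openstack endpoint create --region RegionOne swift public 'https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s'
    openstack role create ResellerAdmin
    openstack role add --project service --user ceph_rgw admin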
2025-05-19 14:50:00.513721 | orchestrator |
2025-05-19 14:50:00.513725 | orchestrator |
2025-05-19 14:50:00.513730 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:50:00.513734 | orchestrator | Monday 19 May 2025 14:49:17 +0000 (0:00:05.015) 0:00:30.687 ************
2025-05-19 14:50:00.513739 | orchestrator | ===============================================================================
2025-05-19 14:50:00.513743 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.10s
2025-05-19 14:50:00.513748 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.44s
2025-05-19 14:50:00.513752 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.02s
2025-05-19 14:50:00.513760 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.87s
2025-05-19 14:50:00.513764 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.45s
2025-05-19 14:50:00.513769 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.79s
2025-05-19 14:50:00.513774 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.55s
2025-05-19 14:50:00.513778 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.25s
2025-05-19 14:50:00.513783 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s
2025-05-19 14:50:00.513787 | orchestrator |
2025-05-19 14:50:00.513792 | orchestrator |
2025-05-19 14:50:00.513796 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************
2025-05-19 14:50:00.513823 | orchestrator |
2025-05-19 14:50:00.513827 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-05-19 14:50:00.513832 | orchestrator | Monday 19 May 2025 14:48:41 +0000 (0:00:00.235) 0:00:00.235 ************
2025-05-19 14:50:00.513836 | orchestrator | changed: [testbed-manager]
2025-05-19 14:50:00.513841 | orchestrator |
2025-05-19 14:50:00.513845 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-05-19 14:50:00.513850 | orchestrator | Monday 19 May 2025 14:48:42 +0000 (0:00:01.123) 0:00:01.359 ************
2025-05-19 14:50:00.513854 | orchestrator | changed: [testbed-manager]
2025-05-19 14:50:00.513859 | orchestrator |
2025-05-19 14:50:00.513863 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-05-19 14:50:00.513868 | orchestrator | Monday 19 May 2025 14:48:43 +0000 (0:00:00.837) 0:00:02.196 ************
2025-05-19 14:50:00.513872 | orchestrator | changed: [testbed-manager]
2025-05-19 14:50:00.513876 | orchestrator |
2025-05-19 14:50:00.513881 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-05-19 14:50:00.513885 | orchestrator | Monday 19 May 2025 14:48:44 +0000 (0:00:00.919) 0:00:03.115 ************
2025-05-19 14:50:00.513890 | orchestrator | changed: [testbed-manager]
2025-05-19 14:50:00.513894 | orchestrator |
2025-05-19 14:50:00.513899 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-05-19 14:50:00.513903 | orchestrator | Monday 19 May 2025 14:48:45 +0000 (0:00:00.987) 0:00:04.102 ************
2025-05-19 14:50:00.513907 | orchestrator | changed: [testbed-manager]
2025-05-19 14:50:00.513912 | orchestrator |
2025-05-19 14:50:00.513916 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-05-19 14:50:00.513921 | orchestrator | Monday 19 May 2025 14:48:46 +0000 (0:00:01.072) 0:00:05.175 ************
2025-05-19 14:50:00.513925 | orchestrator | changed: [testbed-manager]
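These dashboard tasks are thin wrappers around the ceph CLI on the manager node. The settings in this play, together with the enable and admin-user steps that follow below, map approximately to the following commands (the password file path is illustrative):

    # Approximate CLI form of the dashboard bootstrap in this play
    ceph mgr module disable dashboard
    ceph config set mgr mgr/dashboard/ssl false
    ceph config set mgr mgr/dashboard/server_port 7000
    ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
    ceph config set mgr mgr/dashboard/standby_behaviour error
    ceph config set mgr mgr/dashboard/standby_error_status_code 404
    ceph mgr module enable dashboard
    # Create the admin account from the password file written by the play
    ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator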
2025-05-19 14:50:00.513930 | orchestrator |
2025-05-19 14:50:00.513934 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-05-19 14:50:00.513939 | orchestrator | Monday 19 May 2025 14:48:47 +0000 (0:00:00.844) 0:00:06.020 ************
2025-05-19 14:50:00.513943 | orchestrator | changed: [testbed-manager]
2025-05-19 14:50:00.513948 | orchestrator |
2025-05-19 14:50:00.513954 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-05-19 14:50:00.513959 | orchestrator | Monday 19 May 2025 14:48:48 +0000 (0:00:01.152) 0:00:07.173 ************
2025-05-19 14:50:00.513964 | orchestrator | changed: [testbed-manager]
2025-05-19 14:50:00.513968 | orchestrator |
2025-05-19 14:50:00.513972 | orchestrator | TASK [Create admin user] *******************************************************
2025-05-19 14:50:00.513977 | orchestrator | Monday 19 May 2025 14:48:49 +0000 (0:00:00.848) 0:00:08.021 ************
2025-05-19 14:50:00.513981 | orchestrator | changed: [testbed-manager]
2025-05-19 14:50:00.513986 | orchestrator |
2025-05-19 14:50:00.513990 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-05-19 14:50:00.513995 | orchestrator | Monday 19 May 2025 14:49:33 +0000 (0:00:44.728) 0:00:52.750 ************
2025-05-19 14:50:00.513999 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:50:00.514004 | orchestrator |
2025-05-19 14:50:00.514009 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-19 14:50:00.514042 | orchestrator |
2025-05-19 14:50:00.514048 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-19 14:50:00.514053 | orchestrator | Monday 19 May 2025 14:49:34 +0000 (0:00:00.174) 0:00:52.924 ************
2025-05-19 14:50:00.514058 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:50:00.514064 | orchestrator |
2025-05-19 14:50:00.514069 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-19 14:50:00.514075 | orchestrator |
2025-05-19 14:50:00.514080 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-19 14:50:00.514085 | orchestrator | Monday 19 May 2025 14:49:35 +0000 (0:00:01.475) 0:00:54.400 ************
2025-05-19 14:50:00.514090 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:50:00.514095 | orchestrator |
2025-05-19 14:50:00.514100 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-19 14:50:00.514105 | orchestrator |
2025-05-19 14:50:00.514116 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-19 14:50:00.514121 | orchestrator | Monday 19 May 2025 14:49:46 +0000 (0:00:11.158) 0:01:05.559 ************
2025-05-19 14:50:00.514127 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:50:00.514132 | orchestrator |
2025-05-19 14:50:00.514140 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:50:00.514145 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-19 14:50:00.514151 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:50:00.514156 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:50:00.514161 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:50:00.514167 | orchestrator | 2025-05-19 14:50:00.514172 | orchestrator | 2025-05-19 14:50:00.514177 | orchestrator | 2025-05-19 14:50:00.514182 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:50:00.514187 | orchestrator | Monday 19 May 2025 14:49:57 +0000 (0:00:10.970) 0:01:16.529 ************ 2025-05-19 14:50:00.514192 | orchestrator | =============================================================================== 2025-05-19 14:50:00.514197 | orchestrator | Create admin user ------------------------------------------------------ 44.73s 2025-05-19 14:50:00.514203 | orchestrator | Restart ceph manager service ------------------------------------------- 23.60s 2025-05-19 14:50:00.514208 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.15s 2025-05-19 14:50:00.514213 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.12s 2025-05-19 14:50:00.514218 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.07s 2025-05-19 14:50:00.514224 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.99s 2025-05-19 14:50:00.514229 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.92s 2025-05-19 14:50:00.514234 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.85s 2025-05-19 14:50:00.514239 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.84s 2025-05-19 14:50:00.514244 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.84s 2025-05-19 14:50:00.514250 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s 2025-05-19 14:50:03.540996 | orchestrator | 2025-05-19 14:50:03 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:50:03.541637 | orchestrator | 2025-05-19 14:50:03 | INFO  | Task d2f7c3ae-821b-4bca-96cb-2cde906a6542 is in state STARTED 2025-05-19 14:50:03.542228 | orchestrator | 2025-05-19 14:50:03 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED 2025-05-19 14:50:03.542990 | orchestrator | 2025-05-19 14:50:03 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED 2025-05-19 14:50:03.543133 | orchestrator | 2025-05-19 14:50:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:50:06.581276 | orchestrator | 2025-05-19 14:50:06 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:50:06.581783 | orchestrator | 2025-05-19 14:50:06 | INFO  | Task d2f7c3ae-821b-4bca-96cb-2cde906a6542 is in state STARTED 2025-05-19 14:50:06.582879 | orchestrator | 2025-05-19 14:50:06 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED 2025-05-19 14:50:06.583552 | orchestrator | 2025-05-19 14:50:06 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED 2025-05-19 14:50:06.583584 | orchestrator | 2025-05-19 14:50:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:50:09.617165 | orchestrator | 2025-05-19 14:50:09 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:50:09.617269 | orchestrator | 2025-05-19 
2025-05-19 14:50:03.540996 | orchestrator | 2025-05-19 14:50:03 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:50:03.541637 | orchestrator | 2025-05-19 14:50:03 | INFO  | Task d2f7c3ae-821b-4bca-96cb-2cde906a6542 is in state STARTED
2025-05-19 14:50:03.542228 | orchestrator | 2025-05-19 14:50:03 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:50:03.542990 | orchestrator | 2025-05-19 14:50:03 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:50:03.543133 | orchestrator | 2025-05-19 14:50:03 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:50:06.581276 | orchestrator | 2025-05-19 14:50:06 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:50:06.581783 | orchestrator | 2025-05-19 14:50:06 | INFO  | Task d2f7c3ae-821b-4bca-96cb-2cde906a6542 is in state STARTED
2025-05-19 14:50:06.582879 | orchestrator | 2025-05-19 14:50:06 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:50:06.583552 | orchestrator | 2025-05-19 14:50:06 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:50:06.583584 | orchestrator | 2025-05-19 14:50:06 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:51:28.786376 | orchestrator | 2025-05-19 14:51:28 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:51:28.786863 | orchestrator | 2025-05-19 14:51:28 | INFO  | Task d2f7c3ae-821b-4bca-96cb-2cde906a6542 is in state STARTED
2025-05-19 14:51:28.787696 | orchestrator | 2025-05-19 14:51:28 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:51:28.790708 | orchestrator | 2025-05-19 14:51:28 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:51:28.790746 | orchestrator | 2025-05-19 14:51:28 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:51:31.836155 | orchestrator | 2025-05-19 14:51:31 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:51:31.838389 | orchestrator | 2025-05-19 14:51:31 | INFO  | Task d2f7c3ae-821b-4bca-96cb-2cde906a6542 is in state SUCCESS
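The run of STARTED lines above is a plain state-polling loop: query each task's state roughly once a second until it reaches a terminal state, then print its captured output. A minimal sketch of that pattern, assuming a hypothetical get_task_state(task_id) helper returning Celery-style state strings; the real client lives in the OSISM tooling, and nothing here is its actual API:

    import time

    TERMINAL = {"SUCCESS", "FAILURE"}  # Celery-style terminal states

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll each task's state until all of them reach a terminal state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # hypothetical helper
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)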
2025-05-19 14:51:31.840515 | orchestrator |
2025-05-19 14:51:31.840551 | orchestrator |
2025-05-19 14:51:31.840564 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:51:31.840576 | orchestrator |
2025-05-19 14:51:31.840587 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 14:51:31.840598 | orchestrator | Monday 19 May 2025 14:48:47 +0000 (0:00:00.234) 0:00:00.234 ************
2025-05-19 14:51:31.840653 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:51:31.840666 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:51:31.840695 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:51:31.840707 | orchestrator |
2025-05-19 14:51:31.840718 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:51:31.840729 | orchestrator | Monday 19 May 2025 14:48:47 +0000 (0:00:00.239) 0:00:00.473 ************
2025-05-19 14:51:31.840740 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-05-19 14:51:31.840751 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-05-19 14:51:31.840762 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-05-19 14:51:31.840773 | orchestrator |
2025-05-19 14:51:31.840783 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-05-19 14:51:31.840794 | orchestrator |
2025-05-19 14:51:31.840806 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-19 14:51:31.840817 | orchestrator | Monday 19 May 2025 14:48:47 +0000 (0:00:00.395) 0:00:00.869 ************
2025-05-19 14:51:31.840827 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:51:31.840838 | orchestrator |
2025-05-19 14:51:31.840849 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-05-19 14:51:31.840859 | orchestrator | Monday 19 May 2025 14:48:49 +0000 (0:00:01.144) 0:00:02.013 ************
2025-05-19 14:51:31.840870 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-05-19 14:51:31.840880 | orchestrator |
2025-05-19 14:51:31.840891 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-05-19 14:51:31.840926 | orchestrator | Monday 19 May 2025 14:48:53 +0000 (0:00:04.466) 0:00:06.480 ************
2025-05-19 14:51:31.840938 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-05-19 14:51:31.840949 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-05-19 14:51:31.840960 | orchestrator |
2025-05-19 14:51:31.840970 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-05-19 14:51:31.840981 | orchestrator | Monday 19 May 2025 14:48:59 +0000 (0:00:05.766) 0:00:12.246 ************
2025-05-19 14:51:31.840992 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-05-19 14:51:31.841003 | orchestrator |
2025-05-19 14:51:31.841013 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-05-19 14:51:31.841024 | orchestrator | Monday 19 May 2025 14:49:02 +0000 (0:00:02.779) 0:00:15.026 ************
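The service-ks-register steps around this point (services, endpoints, projects, users, role grants) are ordinary Keystone registrations. A sketch of the equivalent calls with openstacksdk, assuming an admin profile named "admin" in clouds.yaml; the role itself uses Ansible modules, not this code:

    import openstack

    conn = openstack.connect(cloud="admin")  # assumption: admin clouds.yaml entry

    # Creating services: glance (image)
    svc = conn.identity.create_service(name="glance", type="image")

    # Creating endpoints: internal and public, matching the URLs in the log.
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:9292"),
        ("public", "https://api.testbed.osism.xyz:9292"),
    ]:
        conn.identity.create_endpoint(service_id=svc.id, interface=interface, url=url)

    # Creating projects, users, and the admin role grant.
    project = conn.identity.create_project(name="service")
    user = conn.identity.create_user(name="glance", default_project_id=project.id)
    role = conn.identity.find_role("admin")
    conn.identity.assign_project_role_to_user(project, user, role)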
orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 14:51:31.841046 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-19 14:51:31.841057 | orchestrator | 2025-05-19 14:51:31.841068 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-19 14:51:31.841078 | orchestrator | Monday 19 May 2025 14:49:05 +0000 (0:00:03.356) 0:00:18.382 ************ 2025-05-19 14:51:31.841089 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 14:51:31.841100 | orchestrator | 2025-05-19 14:51:31.841111 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-19 14:51:31.841142 | orchestrator | Monday 19 May 2025 14:49:08 +0000 (0:00:03.329) 0:00:21.712 ************ 2025-05-19 14:51:31.841153 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-19 14:51:31.841164 | orchestrator | 2025-05-19 14:51:31.841175 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-19 14:51:31.841185 | orchestrator | Monday 19 May 2025 14:49:12 +0000 (0:00:04.207) 0:00:25.919 ************ 2025-05-19 14:51:31.841226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:51:31.841244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:51:31.841258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:51:31.841277 | orchestrator | 2025-05-19 14:51:31.841288 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-19 14:51:31.841299 | orchestrator | Monday 19 May 2025 14:49:15 +0000 (0:00:02.548) 0:00:28.467 ************ 2025-05-19 14:51:31.841310 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:51:31.841321 | orchestrator | 2025-05-19 14:51:31.841337 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-19 14:51:31.841349 | 
orchestrator | Monday 19 May 2025 14:49:15 +0000 (0:00:00.469) 0:00:28.937 ************
2025-05-19 14:51:31.841359 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:51:31.841370 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:51:31.841381 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:51:31.841392 | orchestrator |
2025-05-19 14:51:31.841402 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-05-19 14:51:31.841418 | orchestrator | Monday 19 May 2025 14:49:19 +0000 (0:00:03.554) 0:00:32.492 ************
2025-05-19 14:51:31.841428 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 14:51:31.841440 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 14:51:31.841451 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 14:51:31.841461 | orchestrator |
2025-05-19 14:51:31.841472 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-05-19 14:51:31.841483 | orchestrator | Monday 19 May 2025 14:49:20 +0000 (0:00:01.502) 0:00:33.994 ************
2025-05-19 14:51:31.841494 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 14:51:31.841505 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 14:51:31.841516 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 14:51:31.841526 | orchestrator |
2025-05-19 14:51:31.841537 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-05-19 14:51:31.841548 | orchestrator | Monday 19 May 2025 14:49:22 +0000 (0:00:01.168) 0:00:35.163 ************
2025-05-19 14:51:31.841558 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:51:31.841587 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:51:31.841599 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:51:31.841610 | orchestrator |
2025-05-19 14:51:31.841620 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-05-19 14:51:31.841631 | orchestrator | Monday 19 May 2025 14:49:23 +0000 (0:00:01.243) 0:00:36.407 ************
2025-05-19 14:51:31.841642 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:51:31.841652 | orchestrator |
2025-05-19 14:51:31.841663 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-05-19 14:51:31.841674 | orchestrator | Monday 19 May 2025 14:49:23 +0000 (0:00:00.194) 0:00:36.601 ************
2025-05-19 14:51:31.841684 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:51:31.841728 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:51:31.841741 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:51:31.841752 | orchestrator |
2025-05-19 14:51:31.841762 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-19 14:51:31.841773 | orchestrator | Monday 19 May 2025 14:49:24 +0000 (0:00:00.489) 0:00:37.091 ************
2025-05-19 14:51:31.841783 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
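Each glance-api item dumped in the surrounding tasks carries a haproxy section (mode, port, external_fqdn, backend extras, custom_member_list). A small sketch of how such a dict flattens into an haproxy backend stanza, using illustrative Python string templating rather than the actual kolla-ansible Jinja template:

    # Illustrative rendering of the haproxy settings embedded in the items above.
    service = {
        "name": "glance_api",
        "mode": "http",
        "backend_http_extra": ["timeout server 6h"],
        "custom_member_list": [
            "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        ],
    }

    def render_backend(svc: dict) -> str:
        # Build the backend header, extra options, then one line per member.
        lines = [f"backend {svc['name']}_back", f"    mode {svc['mode']}"]
        lines += [f"    {extra}" for extra in svc["backend_http_extra"]]
        lines += [f"    {member}" for member in svc["custom_member_list"] if member]
        return "\n".join(lines)

    print(render_backend(service))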
orchestrator | 2025-05-19 14:51:31.841805 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-19 14:51:31.841816 | orchestrator | Monday 19 May 2025 14:49:25 +0000 (0:00:00.978) 0:00:38.069 ************ 2025-05-19 14:51:31.841835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:51:31.841850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:51:31.841870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:51:31.841882 | orchestrator | 2025-05-19 14:51:31.841893 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-19 14:51:31.841924 | orchestrator | Monday 19 May 2025 14:49:30 +0000 (0:00:05.063) 0:00:43.133 ************ 2025-05-19 14:51:31.842145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 14:51:31.842188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 14:51:31.842201 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:51:31.842212 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:51:31.842241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 14:51:31.842261 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:51:31.842272 | orchestrator | 2025-05-19 14:51:31.842283 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-19 14:51:31.842294 | orchestrator | Monday 19 May 2025 14:49:33 +0000 (0:00:02.868) 0:00:46.001 ************ 2025-05-19 14:51:31.842305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 14:51:31.842317 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:51:31.842336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 14:51:31.842349 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:51:31.842365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 14:51:31.842383 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:51:31.842394 | orchestrator | 2025-05-19 14:51:31.842405 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-19 14:51:31.842416 | orchestrator | Monday 19 May 2025 14:49:36 +0000 (0:00:03.365) 0:00:49.366 ************ 2025-05-19 14:51:31.842426 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:51:31.842437 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:51:31.842448 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:51:31.842458 | orchestrator | 2025-05-19 14:51:31.842469 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-19 14:51:31.842480 | orchestrator | Monday 19 May 2025 14:49:39 +0000 (0:00:03.614) 0:00:52.980 ************ 2025-05-19 14:51:31.842496 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:51:31.842521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:51:31.842534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:51:31.842546 | orchestrator | 2025-05-19 14:51:31.842556 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-19 14:51:31.842567 | orchestrator | Monday 19 May 2025 14:49:44 +0000 (0:00:04.746) 0:00:57.727 ************ 2025-05-19 14:51:31.842578 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:51:31.842588 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:51:31.842598 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:51:31.842609 | orchestrator | 2025-05-19 14:51:31.842620 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-05-19 14:51:31.842636 | orchestrator | Monday 19 May 2025 14:49:51 +0000 (0:00:06.779) 0:01:04.506 ************ 2025-05-19 14:51:31.842647 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:51:31.842657 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:51:31.842668 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:51:31.842679 | orchestrator | 2025-05-19 14:51:31.842689 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-19 14:51:31.842707 | orchestrator | Monday 19 May 2025 14:49:59 +0000 (0:00:07.933) 0:01:12.439 ************ 2025-05-19 14:51:31.842718 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:51:31.842729 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:51:31.842739 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:51:31.842750 | orchestrator | 2025-05-19 14:51:31.842760 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-19 14:51:31.842775 | orchestrator | Monday 19 May 2025 14:50:04 +0000 (0:00:05.545) 0:01:17.985 ************ 2025-05-19 14:51:31.842786 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:51:31.842797 | orchestrator | skipping: [testbed-node-2] 
2025-05-19 14:51:31.842808 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:51:31.842819 | orchestrator | 2025-05-19 14:51:31.842830 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-19 14:51:31.842840 | orchestrator | Monday 19 May 2025 14:50:08 +0000 (0:00:03.701) 0:01:21.687 ************ 2025-05-19 14:51:31.842851 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:51:31.842862 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:51:31.842872 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:51:31.842883 | orchestrator | 2025-05-19 14:51:31.842894 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-19 14:51:31.842924 | orchestrator | Monday 19 May 2025 14:50:11 +0000 (0:00:02.963) 0:01:24.650 ************ 2025-05-19 14:51:31.842935 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:51:31.842946 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:51:31.842957 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:51:31.842967 | orchestrator | 2025-05-19 14:51:31.842978 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-19 14:51:31.842989 | orchestrator | Monday 19 May 2025 14:50:11 +0000 (0:00:00.240) 0:01:24.891 ************ 2025-05-19 14:51:31.843000 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-19 14:51:31.843012 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:51:31.843022 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-19 14:51:31.843033 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:51:31.843044 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-19 14:51:31.843055 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:51:31.843066 | orchestrator | 2025-05-19 14:51:31.843077 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-19 14:51:31.843087 | orchestrator | Monday 19 May 2025 14:50:14 +0000 (0:00:02.615) 0:01:27.506 ************ 2025-05-19 14:51:31.843099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:51:31.843132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 14:51:31.843146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-19 14:51:31.843165 | orchestrator |
2025-05-19 14:51:31.843176 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-19 14:51:31.843187 | orchestrator | Monday 19 May 2025 14:50:17 +0000 (0:00:03.264) 0:01:30.770 ************
2025-05-19 14:51:31.843197 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:51:31.843208 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:51:31.843219 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:51:31.843230 | orchestrator |
2025-05-19 14:51:31.843241 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-05-19 14:51:31.843251 | orchestrator | Monday 19 May 2025 14:50:18 +0000 (0:00:00.249) 0:01:31.019 ************
2025-05-19 14:51:31.843262 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:51:31.843273 | orchestrator |
2025-05-19 14:51:31.843284 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-05-19 14:51:31.843294 | orchestrator | Monday 19 May 2025 14:50:19 +0000 (0:00:01.680) 0:01:32.700 ************
2025-05-19 14:51:31.843305 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:51:31.843315 | orchestrator |
2025-05-19 14:51:31.843326 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-05-19 14:51:31.843337 | orchestrator | Monday 19 May 2025 14:50:21 +0000 (0:00:01.776) 0:01:34.476 ************
2025-05-19 14:51:31.843347 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:51:31.843358 | orchestrator |
2025-05-19 14:51:31.843370 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-05-19 14:51:31.843388 | orchestrator | Monday 19 May 2025 14:50:23 +0000 (0:00:01.691) 0:01:36.168 ************
2025-05-19 14:51:31.843402 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:51:31.843413 | orchestrator |
2025-05-19 14:51:31.843424 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-05-19 14:51:31.843435 | orchestrator | Monday 19 May 2025 14:50:49 +0000 (0:00:25.836) 0:02:02.004 ************
2025-05-19 14:51:31.843445 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:51:31.843456 | orchestrator |
2025-05-19 14:51:31.843473 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-19 14:51:31.843485 | orchestrator | Monday 19 May 2025 14:50:51 +0000 (0:00:00.061) 0:02:04.404 ************
2025-05-19 14:51:31.843495 | orchestrator |
2025-05-19 14:51:31.843506 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-19 14:51:31.843516 | orchestrator | Monday 19 May 2025 14:50:51 +0000 (0:00:00.060) 0:02:04.465 ************
2025-05-19 14:51:31.843576 | orchestrator |
2025-05-19 14:51:31.843598 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-19 14:51:31.843609 | orchestrator | Monday 19 May 2025 14:50:51 +0000 (0:00:00.060) 0:02:04.526 ************
2025-05-19 14:51:31.843620 | orchestrator |
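The Enable/Disable pair around the bootstrap task above is a MariaDB detail: with binary logging active, creating the stored functions and triggers that the Glance schema migration needs requires log_bin_trust_function_creators, so it is switched on only for the duration of the migration. A minimal stand-alone sketch of the same sequence, assuming a reachable database VIP, the third-party pymysql client, and hypothetical credentials (kolla-ansible drives this through its own Ansible modules, not through this code):

    import subprocess
    import pymysql  # third-party MySQL/MariaDB client, used here as a stand-in

    # Hypothetical connection details; the real VIP and password come from
    # the kolla configuration and are not shown in this log.
    conn = pymysql.connect(host="192.168.16.9", user="root",
                           password="secret", autocommit=True)
    with conn.cursor() as cur:
        # Let non-SUPER users create functions/triggers while the schema
        # migration runs (mirrors the Enable task above).
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
        try:
            # Illustrative stand-in for the bootstrap container, which runs
            # the Glance database migration.
            subprocess.run(["docker", "exec", "glance_api",
                            "glance-manage", "db_sync"], check=True)
        finally:
            # Always revert the global flag (mirrors the Disable task above).
            cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")
    conn.close()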
2025-05-19 14:51:31.843631 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-05-19 14:51:31.843641 | orchestrator | Monday 19 May 2025 14:50:51 +0000 (0:00:00.066) 0:02:04.593 ************
2025-05-19 14:51:31.843652 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:51:31.843663 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:51:31.843675 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:51:31.843694 | orchestrator |
2025-05-19 14:51:31.843705 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:51:31.843717 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-19 14:51:31.843729 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-19 14:51:31.843740 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-19 14:51:31.843758 | orchestrator |
2025-05-19 14:51:31.843769 | orchestrator |
2025-05-19 14:51:31.843780 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:51:31.843790 | orchestrator | Monday 19 May 2025 14:51:30 +0000 (0:00:39.104) 0:02:43.697 ************
2025-05-19 14:51:31.843801 | orchestrator | ===============================================================================
2025-05-19 14:51:31.843811 | orchestrator | glance : Restart glance-api container ---------------------------------- 39.10s
2025-05-19 14:51:31.843850 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.84s
2025-05-19 14:51:31.843863 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 7.93s
2025-05-19 14:51:31.843873 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.78s
2025-05-19 14:51:31.843884 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.77s
2025-05-19 14:51:31.843894 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.55s
2025-05-19 14:51:31.843960 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.06s
2025-05-19 14:51:31.843971 | orchestrator | glance : Copying over config.json files for services -------------------- 4.75s
2025-05-19 14:51:31.843982 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.47s
2025-05-19 14:51:31.843993 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.21s
2025-05-19 14:51:31.844003 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.70s
2025-05-19 14:51:31.844014 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.61s
2025-05-19 14:51:31.844024 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.55s
2025-05-19 14:51:31.844035 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.37s
2025-05-19 14:51:31.844046 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.36s
2025-05-19 14:51:31.844056 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.33s
2025-05-19 14:51:31.844067 | orchestrator | glance : Check glance containers ---------------------------------------- 3.26s
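The glance-api entries in the container definitions above carry a haproxy block whose custom_member_list is a pre-rendered list of backend lines, one per controller, shared by the internal and external frontends. A small sketch of how such member lines can be generated; the helper function is hypothetical, while the host names and addresses are taken from this log:

    # Sketch: rendering HAProxy backend member lines like the
    # 'custom_member_list' entries in the glance-api definition above.
    def member_lines(hosts, port=9292):
        # "check inter 2000 rise 2 fall 5": probe every 2000 ms; two
        # consecutive successes mark a server up, five failures mark it down.
        return [
            f"server {name} {addr}:{port} check inter 2000 rise 2 fall 5"
            for name, addr in hosts.items()
        ]

    print("\n".join(member_lines({
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    })))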
2025-05-19 14:51:31.844078 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 2.96s
2025-05-19 14:51:31.844089 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 2.87s
2025-05-19 14:51:31.844100 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 2.78s
2025-05-19 14:51:31.844111 | orchestrator | 2025-05-19 14:51:31 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:51:31.844223 | orchestrator | 2025-05-19 14:51:31 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:51:31.844238 | orchestrator | 2025-05-19 14:51:31 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:51:34.899378 | orchestrator | 2025-05-19 14:51:34 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:51:34.900037 | orchestrator | 2025-05-19 14:51:34 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:51:34.903396 | orchestrator | 2025-05-19 14:51:34 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:51:34.904590 | orchestrator | 2025-05-19 14:51:34 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:51:34.904620 | orchestrator | 2025-05-19 14:51:34 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:51:37.939609 | orchestrator | 2025-05-19 14:51:37 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:51:37.939976 | orchestrator | 2025-05-19 14:51:37 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:51:37.940873 | orchestrator | 2025-05-19 14:51:37 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:51:37.941678 | orchestrator | 2025-05-19 14:51:37 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:51:37.941703 | orchestrator | 2025-05-19 14:51:37 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:51:40.991055 | orchestrator | 2025-05-19 14:51:40 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:51:40.991643 | orchestrator | 2025-05-19 14:51:40 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:51:40.993181 | orchestrator | 2025-05-19 14:51:40 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:51:40.994742 | orchestrator | 2025-05-19 14:51:40 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:51:40.994769 | orchestrator | 2025-05-19 14:51:40 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:51:44.049106 | orchestrator | 2025-05-19 14:51:44 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:51:44.051444 | orchestrator | 2025-05-19 14:51:44 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:51:44.053242 | orchestrator | 2025-05-19 14:51:44 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:51:44.055239 | orchestrator | 2025-05-19 14:51:44 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:51:44.055265 | orchestrator | 2025-05-19 14:51:44 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:51:47.100192 | orchestrator | 2025-05-19 14:51:47 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
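The interleaved "Task ... is in state STARTED" lines come from the deployment CLI watching several long-running tasks at once and re-checking them on a fixed cadence. A minimal sketch of such a watch loop, assuming a hypothetical get_state() that returns Celery-style states (PENDING/STARTED/SUCCESS/FAILURE); the real watcher's internals are not shown in this log:

    import time

    # Poll a set of task IDs together until each reaches a terminal state,
    # printing progress lines like the ones in the log above.
    def wait_for_tasks(task_ids, get_state, interval=1.0):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):  # sorted() copies, so discard() below is safe
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)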
2025-05-19 14:51:47.103509 | orchestrator | 2025-05-19 14:51:47 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:51:47.106005 | orchestrator | 2025-05-19 14:51:47 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:51:47.109123 | orchestrator | 2025-05-19 14:51:47 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:51:47.109433 | orchestrator | 2025-05-19 14:51:47 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:51:50.151083 | orchestrator | 2025-05-19 14:51:50 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:51:50.151191 | orchestrator | 2025-05-19 14:51:50 | INFO  | Task a67a47e7-2dbd-4933-a118-e910a041c132 is in state STARTED
2025-05-19 14:51:50.154136 | orchestrator | 2025-05-19 14:51:50 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:51:50.154523 | orchestrator | 2025-05-19 14:51:50 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:51:50.157188 | orchestrator | 2025-05-19 14:51:50 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:51:50.157219 | orchestrator | 2025-05-19 14:51:50 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:51:53.212504 | orchestrator | 2025-05-19 14:51:53 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:51:53.215087 | orchestrator | 2025-05-19 14:51:53 | INFO  | Task a67a47e7-2dbd-4933-a118-e910a041c132 is in state STARTED
2025-05-19 14:51:53.217783 | orchestrator | 2025-05-19 14:51:53 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:51:53.217913 | orchestrator | 2025-05-19 14:51:53 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:51:53.218842 | orchestrator | 2025-05-19 14:51:53 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:51:53.220712 | orchestrator | 2025-05-19 14:51:53 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:51:56.264440 | orchestrator | 2025-05-19 14:51:56 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:51:56.267030 | orchestrator | 2025-05-19 14:51:56 | INFO  | Task a67a47e7-2dbd-4933-a118-e910a041c132 is in state STARTED
2025-05-19 14:51:56.271678 | orchestrator | 2025-05-19 14:51:56 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:51:56.272487 | orchestrator | 2025-05-19 14:51:56 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:51:56.279219 | orchestrator | 2025-05-19 14:51:56 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:51:56.279270 | orchestrator | 2025-05-19 14:51:56 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:51:59.345475 | orchestrator | 2025-05-19 14:51:59 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:51:59.347802 | orchestrator | 2025-05-19 14:51:59 | INFO  | Task a67a47e7-2dbd-4933-a118-e910a041c132 is in state STARTED
2025-05-19 14:51:59.347850 | orchestrator | 2025-05-19 14:51:59 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:51:59.350103 | orchestrator | 2025-05-19 14:51:59 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:51:59.351268 | orchestrator | 2025-05-19 14:51:59 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
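The grouping play that follows ("Group hosts based on enabled services") typically uses Ansible's group_by to sort hosts into dynamic groups such as enable_prometheus_True before a role is applied. A plain-Python illustration of the same bucketing; the host flags are inferred from the log, and the snippet shows only the mechanism, not the actual playbook code:

    from collections import defaultdict

    # Dynamic grouping as performed by group_by with a key like
    # "enable_prometheus_{{ enable_prometheus }}".
    hostvars = {
        "testbed-manager": {"enable_prometheus": True},
        "testbed-node-0": {"enable_prometheus": True},
        "testbed-node-1": {"enable_prometheus": True},
    }

    groups = defaultdict(list)
    for host, facts in hostvars.items():
        groups[f"enable_prometheus_{facts['enable_prometheus']}"].append(host)

    print(dict(groups))  # {'enable_prometheus_True': ['testbed-manager', ...]}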
2025-05-19 14:51:59.351295 | orchestrator | 2025-05-19 14:51:59 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:52:02.396340 | orchestrator | 2025-05-19 14:52:02 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:52:02.401143 | orchestrator | 2025-05-19 14:52:02 | INFO  | Task a67a47e7-2dbd-4933-a118-e910a041c132 is in state STARTED
2025-05-19 14:52:02.401187 | orchestrator | 2025-05-19 14:52:02 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state STARTED
2025-05-19 14:52:02.404737 | orchestrator | 2025-05-19 14:52:02 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:52:02.407474 | orchestrator | 2025-05-19 14:52:02 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:52:02.408211 | orchestrator | 2025-05-19 14:52:02 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:52:05.459842 | orchestrator | 2025-05-19 14:52:05 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:52:05.465026 | orchestrator | 2025-05-19 14:52:05 | INFO  | Task a67a47e7-2dbd-4933-a118-e910a041c132 is in state STARTED
2025-05-19 14:52:05.470069 | orchestrator | 2025-05-19 14:52:05 | INFO  | Task 8b85e685-039e-4681-a402-e49d731296e3 is in state STARTED
2025-05-19 14:52:05.473774 | orchestrator | 2025-05-19 14:52:05 | INFO  | Task 689192b0-eb62-4f99-8b50-95dc5bbc5d48 is in state SUCCESS
2025-05-19 14:52:05.474194 | orchestrator |
2025-05-19 14:52:05.475862 | orchestrator |
2025-05-19 14:52:05.475887 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:52:05.475895 | orchestrator |
2025-05-19 14:52:05.475903 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 14:52:05.475910 | orchestrator | Monday 19 May 2025 14:48:41 +0000 (0:00:00.200) 0:00:00.200 ************
2025-05-19 14:52:05.475918 | orchestrator | ok: [testbed-manager]
2025-05-19 14:52:05.475956 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:52:05.475971 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:52:05.475984 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:52:05.476072 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:52:05.476083 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:52:05.476090 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:52:05.476097 | orchestrator |
2025-05-19 14:52:05.476104 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:52:05.476112 | orchestrator | Monday 19 May 2025 14:48:41 +0000 (0:00:00.561) 0:00:00.761 ************
2025-05-19 14:52:05.476120 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-05-19 14:52:05.476133 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-05-19 14:52:05.476147 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-05-19 14:52:05.476160 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-05-19 14:52:05.476172 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-05-19 14:52:05.476184 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-05-19 14:52:05.476192 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-05-19 14:52:05.476199 | orchestrator |
2025-05-19 14:52:05.476232 | orchestrator | PLAY [Apply role prometheus]
*************************************************** 2025-05-19 14:52:05.476240 | orchestrator | 2025-05-19 14:52:05.476247 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-19 14:52:05.476254 | orchestrator | Monday 19 May 2025 14:48:42 +0000 (0:00:00.500) 0:00:01.261 ************ 2025-05-19 14:52:05.476262 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:52:05.476270 | orchestrator | 2025-05-19 14:52:05.476278 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-19 14:52:05.476285 | orchestrator | Monday 19 May 2025 14:48:43 +0000 (0:00:01.116) 0:00:02.377 ************ 2025-05-19 14:52:05.476295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.476314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.476325 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 14:52:05.476334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.476364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.476374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.476383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.476393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.476407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.476416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.476425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.476435 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.476459 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.476470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.476481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.476492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.476506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.476516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.476532 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 14:52:05.476569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.476649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.476672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.476691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.476764 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.476880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.476901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.476975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.476996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.477006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.477015 | orchestrator | 2025-05-19 14:52:05.477024 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-19 14:52:05.477033 | orchestrator | Monday 19 May 2025 
14:48:46 +0000 (0:00:03.142) 0:00:05.520 ************ 2025-05-19 14:52:05.477042 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:52:05.477051 | orchestrator | 2025-05-19 14:52:05.477060 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-19 14:52:05.477068 | orchestrator | Monday 19 May 2025 14:48:47 +0000 (0:00:01.284) 0:00:06.804 ************ 2025-05-19 14:52:05.477077 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 14:52:05.477092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.477107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.477119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.477136 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 
14:52:05.477146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.477155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.477164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.477173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.477186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.477201 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.477210 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.477223 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.477233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.477242 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 14:52:05.477253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.477280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.477316 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.477327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.477346 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.477363 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.477380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.477444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.477464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.477498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.477516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.477534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.478262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.478310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.478328 | orchestrator | 2025-05-19 14:52:05.478344 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-19 14:52:05.478362 | orchestrator | Monday 19 May 2025 14:48:53 +0000 (0:00:05.631) 0:00:12.436 ************ 2025-05-19 14:52:05.478379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.478396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.478431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.478488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.478505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.478614 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-19 14:52:05.478625 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:05.478635 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.478644 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.478653 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-19 14:52:05.478674 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.478684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.478693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.478706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.478716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.478725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.478734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.478748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.478760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.478769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.478778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.478787 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:52:05.478796 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:52:05.478805 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:52:05.478819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.478828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.478838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.478857 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:52:05.478867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.478881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.478891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.478902 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:52:05.478913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.478977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479024 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:52:05.479161 | orchestrator |
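Each loop item in these tasks is one entry of the Prometheus service map that kolla-ansible walks host by host. Reshaped from the item output above into YAML, a single entry looks like the sketch below; the values are taken from the log, while the enclosing variable name (prometheus_services, per kolla-ansible role conventions) is an assumption:

# One service entry, reconstructed from the loop items above
prometheus-node-exporter:
  container_name: prometheus_node_exporter
  group: prometheus-node-exporter
  enabled: true
  image: registry.osism.tech/kolla/prometheus-node-exporter:2024.2
  pid_mode: host
  volumes:
    - /etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
    - /:/host:ro,rslave
  dimensions: {}

A host only acts on entries whose group matches one of its inventory groups, which is presumably why testbed-manager sees the prometheus-server, prometheus-alertmanager and prometheus-blackbox-exporter items while testbed-node-3/4/5 see the libvirt exporter instead.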
2025-05-19 14:52:05.479179 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-05-19 14:52:05.479216 | orchestrator | Monday 19 May 2025 14:48:54 +0000 (0:00:01.365) 0:00:13.802 ************
2025-05-19 14:52:05.479227 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-19 14:52:05.479244 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479259 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479269 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-19 14:52:05.479279 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479288 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:52:05.479304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479362 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:52:05.479371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479427 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:05.479436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479484 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:52:05.479498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479530 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:52:05.479539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479570 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:52:05.479579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479616 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:52:05.479625 | orchestrator |
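Both backend-TLS copy tasks skipped every item on every host, which is the expected outcome when backend TLS is not enabled for the deployment. As a hedged sketch, the switches below are the standard kolla-ansible globals.yml options that would make service-cert-copy distribute certificates instead of skipping; whether and how this testbed sets them is not visible in the log:

# globals.yml excerpt (illustrative only, not taken from this build)
kolla_enable_tls_internal: "yes"   # terminate TLS on the internal VIP
kolla_enable_tls_backend: "yes"    # per-service backend TLS; activates the cert copy tasks above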
2025-05-19 14:52:05.479634 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-05-19 14:52:05.479643 | orchestrator | Monday 19 May 2025 14:48:56 +0000 (0:00:01.702) 0:00:15.505 ************
2025-05-19 14:52:05.479652 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-19 14:52:05.479661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479719 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 14:52:05.479738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479778 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479834 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-19 14:52:05.479847 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479911 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 14:52:05.479958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.479998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.480008 | orchestrator |
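The two tasks that follow stat custom rule files under /operations/prometheus on the deployment host and copy them to the Prometheus server; only testbed-manager reports changed, while every other node skips each file. For orientation, a Prometheus recording-rule file such as the node.rec.rules handled below generally has the shape sketched here; the group name and expression are invented placeholders, not the testbed's actual rules:

# Illustrative Prometheus recording-rule file layout
groups:
  - name: node-recording-rules
    rules:
      - record: instance:node_cpu_utilisation:rate5m
        expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))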
2025-05-19 14:52:05.480017 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-05-19 14:52:05.480026 | orchestrator | Monday 19 May 2025 14:49:01 +0000 (0:00:05.097) 0:00:20.602 ************
2025-05-19 14:52:05.480035 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-19 14:52:05.480047 | orchestrator |
2025-05-19 14:52:05.480061 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-05-19 14:52:05.480075 | orchestrator | Monday 19 May 2025 14:49:02 +0000 (0:00:01.021) 0:00:21.624 ************
2025-05-19 14:52:05.480084 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1340018, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.938232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480094 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1340018, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.938232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480103 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1340018, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.938232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480116 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1339999, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480126 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1340018, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.938232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480143 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1339999, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480156 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1340018, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.938232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480166 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1340018, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.938232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480175 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1340018, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.938232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480184 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1339999, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480197 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339986, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.929232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480211 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1339999, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480221 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1339999, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480234 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339986, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.929232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480244 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339989, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480253 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1339999, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480262 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1339999, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480274 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339986, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.929232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480288 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339986, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.929232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480297 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339986, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.929232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480306 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339989, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480320 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1339997, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480330 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339986, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.929232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480338 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339992, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480351 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339989, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480367 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1339997, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480376 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339989, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480385 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1339996, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480751 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339989, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480767 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339989, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480776 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1340001, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.934232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480786 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1339997, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480806 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1339997, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480815 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339986, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.929232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480824 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339992, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480839 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1339997, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.480848 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339992, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime':
1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.480857 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1339997, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.480873 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339992, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.480885 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1339996, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.480894 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1340012, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.937232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.480903 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339992, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.480917 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 7933, 'inode': 1339996, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481039 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1340001, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.934232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481072 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339992, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481091 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339989, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:52:05.481106 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1339996, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481115 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1340001, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.934232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481124 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1339996, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481143 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1340012, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.937232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481153 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1340039, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481162 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1340001, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.934232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481176 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1339996, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481189 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1340039, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481198 | orchestrator 
| skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1340012, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.937232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481207 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1340001, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.934232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481221 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1340004, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.935232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481231 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1340004, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.935232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481248 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1340012, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.937232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481257 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1340001, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.934232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481270 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339991, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481279 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1339997, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:52:05.481288 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1340039, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481302 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1340039, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481311 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1340012, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.937232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481326 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339991, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 
1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481335 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1340012, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.937232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481347 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1339995, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481357 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1340039, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481368 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1340004, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.935232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481384 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1340004, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.935232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481394 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1340039, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481410 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1339995, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481420 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339984, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9282317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481434 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1340004, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.935232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481444 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1340004, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.935232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481455 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339992, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:52:05.481470 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339991, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481481 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339991, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481497 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1339998, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481508 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339991, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481522 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339991, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481532 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1339995, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481542 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1340038, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481557 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339984, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9282317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481572 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1339995, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481582 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339984, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9282317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481592 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1339995, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481606 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1339995, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481616 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1339994, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481626 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1339998, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481641 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339984, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9282317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481656 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1340021, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9392319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481668 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:52:05.481678 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339984, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9282317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481689 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1339996, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:52:05.481703 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339984, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9282317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481714 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1339998, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481723 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1339998, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481737 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1339998, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481751 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1340038, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481760 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1340038, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481769 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1339998, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481782 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1340038, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481791 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1340038, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481799 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1339994, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481818 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1340038, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481827 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1339994, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481836 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1340001, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.934232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:52:05.481845 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1339994, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481860 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1339994, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481869 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1339994, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481878 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1340021, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9392319, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481891 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:52:05.481905 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1340021, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9392319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481914 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:52:05.481923 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1340021, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9392319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481965 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:52:05.481981 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1340021, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9392319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.481995 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:52:05.482009 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1340021, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9392319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 14:52:05.482067 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:52:05.482082 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1340012, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.937232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:52:05.482092 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1340039, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:52:05.482107 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1340004, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.935232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:52:05.482122 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339991, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9302318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:52:05.482132 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1339995, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.932232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:52:05.482141 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339984, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9282317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 14:52:05.482150 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1339998, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9332318, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.482163 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1340038, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9422321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.482172 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1339994, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9312317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.482186 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1340021, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.9392319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-19 14:52:05.482195 | orchestrator |
2025-05-19 14:52:05.482204 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-05-19 14:52:05.482213 | orchestrator | Monday 19 May 2025 14:49:22 +0000 (0:00:19.753) 0:00:41.377 ************
2025-05-19 14:52:05.482222 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-19 14:52:05.482231 | orchestrator |
2025-05-19 14:52:05.482243 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-19 14:52:05.482252 | orchestrator | Monday 19 May 2025 14:49:23 +0000 (0:00:01.225) 0:00:42.602 ************
2025-05-19 14:52:05.482261 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-05-19 14:52:05.482305 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-05-19 14:52:05.482348 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-05-19 14:52:05.482391 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-05-19 14:52:05.482434 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-05-19 14:52:05.482477 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-05-19 14:52:05.482525 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-05-19 14:52:05.482572 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-19 14:52:05.482581 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-19 14:52:05.482589 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-19 14:52:05.482598 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-19 14:52:05.482606 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-19 14:52:05.482615 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-19 14:52:05.482623 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-19 14:52:05.482632 | orchestrator |
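The "is not a directory" warnings above are benign: the role probes an optional per-host override directory on the deploy host and simply skips it when nothing is there. A minimal sketch of that lookup, with paths and variable names inferred from the log rather than taken from the literal kolla-ansible task:

  - name: Find prometheus host config overrides
    ansible.builtin.find:
      paths: "/opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml.d"
      patterns: "*.yml"
    delegate_to: localhost
    register: prometheus_host_overrides
    # A missing or non-directory path is not a failure: find emits the
    # "Skipped ... is not a directory" warning seen above and returns no files.
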
2025-05-19 14:52:05.482641 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-19 14:52:05.482649 | orchestrator | Monday 19 May 2025 14:49:25 +0000 (0:00:02.482) 0:00:45.085 ************
2025-05-19 14:52:05.482658 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-19 14:52:05.482667 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:05.482675 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-19 14:52:05.482684 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:52:05.482693 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-19 14:52:05.482701 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:52:05.482710 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-19 14:52:05.482718 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:52:05.482727 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-19 14:52:05.482736 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:52:05.482744 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-19 14:52:05.482753 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:52:05.482761 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-19 14:52:05.482770 | orchestrator |
2025-05-19 14:52:05.482779 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-05-19 14:52:05.482787 | orchestrator | Monday 19 May 2025 14:49:41 +0000 (0:00:15.777) 0:01:00.862 ************
2025-05-19 14:52:05.482796 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-19 14:52:05.482809 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:05.482818 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-19 14:52:05.482826 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:52:05.482835 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-19 14:52:05.482843 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:52:05.482852 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-19 14:52:05.482861 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:52:05.482869 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-19 14:52:05.482878 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:52:05.482887 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-19 14:52:05.482900 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:52:05.482909 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-19 14:52:05.482918 | orchestrator |
2025-05-19 14:52:05.482968 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-05-19 14:52:05.482980 | orchestrator | Monday 19 May 2025 14:49:44 +0000 (0:00:02.987) 0:01:03.850 ************
2025-05-19 14:52:05.482990 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-19 14:52:05.482999 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-19 14:52:05.483008 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-19 14:52:05.483017 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:05.483025 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-19 14:52:05.483034 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:52:05.483043 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:52:05.483051 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-19 14:52:05.483060 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:52:05.483069 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-19 14:52:05.483077 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:52:05.483086 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-19 14:52:05.483095 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:52:05.483103 | orchestrator |
2025-05-19 14:52:05.483116 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-05-19 14:52:05.483125 | orchestrator | Monday 19 May 2025 14:49:47 +0000 (0:00:02.510) 0:01:06.361 ************
2025-05-19 14:52:05.483133 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-19 14:52:05.483142 | orchestrator |
2025-05-19 14:52:05.483150 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-05-19 14:52:05.483159 | orchestrator | Monday 19 May 2025 14:49:48 +0000 (0:00:01.341) 0:01:07.702 ************
2025-05-19 14:52:05.483168 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:52:05.483176 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:05.483185 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:52:05.483193 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:52:05.483202 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:52:05.483210 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:52:05.483219 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:52:05.483227 | orchestrator |
2025-05-19 14:52:05.483236 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-05-19 14:52:05.483244 | orchestrator | Monday 19 May 2025 14:49:49 +0000 (0:00:00.661) 0:01:08.363 ************
2025-05-19 14:52:05.483253 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:52:05.483262 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:52:05.483270 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:52:05.483279 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:52:05.483287 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:52:05.483296 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:52:05.483304 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:52:05.483313 | orchestrator |
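The my.cnf copy changes only on the three control nodes that run mysqld_exporter and is skipped everywhere else. A hedged sketch of such a credentials drop, with group and file names as illustrative assumptions rather than the exact kolla-ansible task:

  - name: Copying over my.cnf for mysqld_exporter
    ansible.builtin.template:
      src: my.cnf.j2
      dest: /etc/kolla/prometheus-mysqld-exporter/my.cnf
      mode: "0600"
    become: true
    when: inventory_hostname in groups['prometheus-mysqld-exporter']
    notify: Restart prometheus-mysqld-exporter container
    # The notify line is what later queues the matching RUNNING HANDLER
    # seen further down in this log.
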
2025-05-19 14:52:05.483321 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-05-19 14:52:05.483330 | orchestrator | Monday 19 May 2025 14:49:51 +0000 (0:00:02.640) 0:01:11.004 ************
2025-05-19 14:52:05.483345 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-19 14:52:05.483354 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-19 14:52:05.483362 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-19 14:52:05.483371 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-19 14:52:05.483379 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-19 14:52:05.483388 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:52:05.483396 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:52:05.483405 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:52:05.483413 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:05.483427 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:52:05.483436 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-19 14:52:05.483445 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:52:05.483454 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-19 14:52:05.483462 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:52:05.483471 | orchestrator |
2025-05-19 14:52:05.483479 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-05-19 14:52:05.483488 | orchestrator | Monday 19 May 2025 14:49:55 +0000 (0:00:03.724) 0:01:14.728 ************
2025-05-19 14:52:05.483496 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-19 14:52:05.483505 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-19 14:52:05.483514 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:05.483522 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-19 14:52:05.483531 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:52:05.483540 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-19 14:52:05.483548 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:52:05.483557 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-19 14:52:05.483565 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:52:05.483574 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-19 14:52:05.483583 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:52:05.483591 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-19 14:52:05.483600 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:52:05.483608 | orchestrator |
2025-05-19 14:52:05.483617 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-05-19 14:52:05.483625 | orchestrator | Monday 19 May 2025 14:49:58 +0000 (0:00:03.154) 0:01:17.882
************ 2025-05-19 14:52:05.483634 | orchestrator | [WARNING]: Skipped 2025-05-19 14:52:05.483643 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-19 14:52:05.483651 | orchestrator | due to this access issue: 2025-05-19 14:52:05.483660 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-19 14:52:05.483669 | orchestrator | not a directory 2025-05-19 14:52:05.483677 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 14:52:05.483686 | orchestrator | 2025-05-19 14:52:05.483694 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-19 14:52:05.483703 | orchestrator | Monday 19 May 2025 14:49:59 +0000 (0:00:01.088) 0:01:18.971 ************ 2025-05-19 14:52:05.483717 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:52:05.483726 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:52:05.483738 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:52:05.483747 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:52:05.483755 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:52:05.483764 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:52:05.483772 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:52:05.483781 | orchestrator | 2025-05-19 14:52:05.483789 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-19 14:52:05.483798 | orchestrator | Monday 19 May 2025 14:50:01 +0000 (0:00:01.271) 0:01:20.243 ************ 2025-05-19 14:52:05.483807 | orchestrator | skipping: [testbed-manager] 2025-05-19 14:52:05.483815 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:52:05.483824 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:52:05.483832 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:52:05.483840 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:52:05.483849 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:52:05.483857 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:52:05.483866 | orchestrator | 2025-05-19 14:52:05.483874 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-19 14:52:05.483883 | orchestrator | Monday 19 May 2025 14:50:02 +0000 (0:00:01.028) 0:01:21.272 ************ 2025-05-19 14:52:05.483892 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 14:52:05.483908 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.483918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.483947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.483957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.483975 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.483984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.483993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.484002 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.484017 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 14:52:05.484026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.484036 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 14:52:05.484055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.484069 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.484078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.484087 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.484101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.484111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.484120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.484134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 14:52:05.484143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.484156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.484165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.484174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.484188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.484197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 14:52:05.484206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.484220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.484233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 14:52:05.484242 | orchestrator |
2025-05-19 14:52:05.484251 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-05-19 14:52:05.484260 | orchestrator | Monday 19 May 2025 14:50:06 +0000 (0:00:04.081) 0:01:25.353 ************
2025-05-19 14:52:05.484269 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-19 14:52:05.484277 | orchestrator | skipping: [testbed-manager]
2025-05-19 14:52:05.484286 | orchestrator |
2025-05-19 14:52:05.484294 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-19 14:52:05.484303 | orchestrator | Monday 19 May 2025 14:50:07 +0000 (0:00:00.953) 0:01:26.307 ************
2025-05-19 14:52:05.484312 | orchestrator |
2025-05-19 14:52:05.484320 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-19 14:52:05.484328 | orchestrator | Monday 19 May 2025 14:50:07 +0000 (0:00:00.049) 0:01:26.356 ************
2025-05-19 14:52:05.484337 | orchestrator |
2025-05-19 14:52:05.484345 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-19 14:52:05.484354 | orchestrator | Monday 19 May 2025 14:50:07 +0000 (0:00:00.047) 0:01:26.404 ************
2025-05-19 14:52:05.484362 | orchestrator |
2025-05-19 14:52:05.484371 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-19 14:52:05.484380 | orchestrator | Monday 19 May 2025 14:50:07 +0000 (0:00:00.046) 0:01:26.451 ************
2025-05-19 14:52:05.484388 | orchestrator |
2025-05-19 14:52:05.484397 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-19 14:52:05.484405 | orchestrator | Monday 19 May 2025 14:50:07 +0000 (0:00:00.303) 0:01:26.754 ************
2025-05-19 14:52:05.484414 | orchestrator |
2025-05-19 14:52:05.484422 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-19 14:52:05.484431 | orchestrator | Monday 19 May 2025 14:50:07 +0000 (0:00:00.100) 0:01:26.854 ************
2025-05-19 14:52:05.484439 | orchestrator |
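These repeated "Flush handlers" entries are meta tasks: the play flushes pending handler notifications at a fixed point in each service section, so container restarts happen here rather than all at the end of the play. In playbook form this is simply:

  - name: Flush handlers
    ansible.builtin.meta: flush_handlers
    # Every handler notified so far (e.g. "Restart prometheus-server
    # container") runs right here, in notification order.
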
2025-05-19 14:52:05.484448 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-19 14:52:05.484456 | orchestrator | Monday 19 May 2025 14:50:07 +0000 (0:00:00.101) 0:01:26.956 ************
2025-05-19 14:52:05.484465 | orchestrator |
2025-05-19 14:52:05.484473 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-05-19 14:52:05.484482 | orchestrator | Monday 19 May 2025 14:50:07 +0000 (0:00:00.133) 0:01:27.090 ************
2025-05-19 14:52:05.484490 | orchestrator | changed: [testbed-manager]
2025-05-19 14:52:05.484499 | orchestrator |
2025-05-19 14:52:05.484512 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-05-19 14:52:05.484521 | orchestrator | Monday 19 May 2025 14:50:29 +0000 (0:00:21.614) 0:01:48.704 ************
2025-05-19 14:52:05.484534 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:52:05.484543 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:52:05.484551 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:52:05.484560 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:52:05.484568 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:52:05.484577 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:52:05.484585 | orchestrator | changed: [testbed-manager]
2025-05-19 14:52:05.484594 | orchestrator |
2025-05-19 14:52:05.484603 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-05-19 14:52:05.484611 | orchestrator | Monday 19 May 2025 14:50:42 +0000 (0:00:12.967) 0:02:01.672 ************
2025-05-19 14:52:05.484620 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:52:05.484628 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:52:05.484637 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:52:05.484645 | orchestrator |
2025-05-19 14:52:05.484654 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-05-19 14:52:05.484662 | orchestrator | Monday 19 May 2025 14:50:52 +0000 (0:00:10.253) 0:02:11.926 ************
2025-05-19 14:52:05.484671 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:52:05.484679 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:52:05.484688 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:52:05.484696 | orchestrator |
2025-05-19 14:52:05.484705 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-05-19 14:52:05.484713 | orchestrator | Monday 19 May 2025 14:51:03 +0000 (0:00:11.047) 0:02:22.973 ************
2025-05-19 14:52:05.484722 | orchestrator | changed: [testbed-manager]
2025-05-19 14:52:05.484730 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:52:05.484739 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:52:05.484747 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:52:05.484756 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:52:05.484764 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:52:05.484773 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:52:05.484781 | orchestrator |
2025-05-19 14:52:05.484790 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-05-19 14:52:05.484798 | orchestrator | Monday 19 May 2025 14:51:19 +0000 (0:00:15.982) 0:02:38.956 ************
2025-05-19 14:52:05.484807 | orchestrator | changed: [testbed-manager]
2025-05-19 14:52:05.484816 | orchestrator |
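Each of these handlers recreates the corresponding service container so the new configuration takes effect; kolla-ansible does this with its own container module. A rough equivalent with the community.docker collection, reusing values from the container check logged above (a sketch, not the role's actual handler):

  - name: Restart prometheus-server container
    become: true
    community.docker.docker_container:
      name: prometheus_server
      image: registry.osism.tech/kolla/prometheus-v2-server:2024.2
      state: started
      recreate: true          # force a fresh container with the new config
      volumes:
        - /etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro
        - prometheus_v2:/var/lib/prometheus
        - kolla_logs:/var/log/kolla/
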
2025-05-19 14:52:05.484824 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-05-19 14:52:05.484833 | orchestrator | Monday 19 May 2025 14:51:35 +0000 (0:00:15.586) 0:02:54.542 ************
2025-05-19 14:52:05.484842 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:52:05.484850 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:52:05.484859 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:52:05.484867 | orchestrator |
2025-05-19 14:52:05.484876 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-05-19 14:52:05.484884 | orchestrator | Monday 19 May 2025 14:51:45 +0000 (0:00:10.582) 0:03:05.124 ************
2025-05-19 14:52:05.484893 | orchestrator | changed: [testbed-manager]
2025-05-19 14:52:05.484902 | orchestrator |
2025-05-19 14:52:05.484910 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-05-19 14:52:05.484919 | orchestrator | Monday 19 May 2025 14:51:50 +0000 (0:00:04.761) 0:03:09.886 ************
2025-05-19 14:52:05.484946 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:52:05.484955 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:52:05.484963 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:52:05.484972 | orchestrator |
2025-05-19 14:52:05.484981 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:52:05.484993 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-19 14:52:05.485007 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-19 14:52:05.485016 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-19 14:52:05.485025 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-19 14:52:05.485033 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-19 14:52:05.485042 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-19 14:52:05.485051 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-19 14:52:05.485059 | orchestrator |
2025-05-19 14:52:05.485068 | orchestrator |
2025-05-19 14:52:05.485077 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:52:05.485085 | orchestrator | Monday 19 May 2025 14:52:02 +0000 (0:00:12.186) 0:03:22.073 ************
2025-05-19 14:52:05.485094 | orchestrator | ===============================================================================
2025-05-19 14:52:05.485103 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.61s
2025-05-19 14:52:05.485111 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 19.75s
2025-05-19 14:52:05.485120 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.98s
2025-05-19 14:52:05.485128 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.78s
2025-05-19 14:52:05.485137 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 15.59s
2025-05-19 14:52:05.485150 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.97s
2025-05-19 14:52:05.485159 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.19s
2025-05-19 14:52:05.485167 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.05s
2025-05-19 14:52:05.485176 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.58s
2025-05-19 14:52:05.485184 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.25s
2025-05-19 14:52:05.485193 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.63s
2025-05-19 14:52:05.485201 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.10s
2025-05-19 14:52:05.485210 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.76s
2025-05-19 14:52:05.485219 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.08s
2025-05-19 14:52:05.485227 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.72s
2025-05-19 14:52:05.485236 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.15s
2025-05-19 14:52:05.485244 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.14s
2025-05-19 14:52:05.485253 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.99s
2025-05-19 14:52:05.485261 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.64s
2025-05-19 14:52:05.485270 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.51s
2025-05-19 14:52:05.485278 | orchestrator | 2025-05-19 14:52:05 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:52:05.485287 | orchestrator | 2025-05-19 14:52:05 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:52:05.485296 | orchestrator | 2025-05-19 14:52:05 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:52:08.532865 | orchestrator | 2025-05-19 14:52:08 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:52:08.533167 | orchestrator | 2025-05-19 14:52:08 | INFO  | Task a67a47e7-2dbd-4933-a118-e910a041c132 is in state SUCCESS
2025-05-19 14:52:08.534541 | orchestrator | 2025-05-19 14:52:08 | INFO  | Task 8b85e685-039e-4681-a402-e49d731296e3 is in state STARTED
2025-05-19 14:52:08.535900 | orchestrator | 2025-05-19 14:52:08 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:52:08.538071 | orchestrator | 2025-05-19 14:52:08 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state STARTED
2025-05-19 14:52:08.538530 | orchestrator | 2025-05-19 14:52:08 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:52:54.107011 | orchestrator | 2025-05-19 14:52:54 | INFO  | Task facc8583-dae3-45d2-b5a2-054db528afae is in state STARTED
2025-05-19 14:52:54.107210 | orchestrator | 2025-05-19 14:52:54 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:52:54.108241 | orchestrator | 2025-05-19 14:52:54 | INFO  | Task 8b85e685-039e-4681-a402-e49d731296e3 is in state STARTED
2025-05-19 14:52:54.110820 | orchestrator | 2025-05-19 14:52:54 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:52:54.112757 | orchestrator | 2025-05-19 14:52:54 | INFO  | Task 2b254e86-a618-4658-8dc7-54d8146270de is in state SUCCESS
2025-05-19 14:52:54.114304 | orchestrator |
2025-05-19 14:52:54.114349 | orchestrator | None
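The loop above is the deployment driver polling the state of the queued OSISM tasks once per second until each reports SUCCESS. Expressed as an Ansible retry loop it would look roughly like the sketch below; the "osism task show" style command and the 300-try budget are hypothetical stand-ins for the real wait logic, which lives in the deploy tooling rather than a playbook:

  - name: Wait for OSISM task to reach SUCCESS
    ansible.builtin.command: osism task show 2b254e86-a618-4658-8dc7-54d8146270de   # hypothetical CLI call
    register: task_state
    until: "'SUCCESS' in task_state.stdout"
    retries: 300
    delay: 1          # matches "Wait 1 second(s) until the next check"
    changed_when: false
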
2025-05-19 14:52:54.114374 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:52:54.114386 | orchestrator |
2025-05-19 14:52:54.114397 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 14:52:54.114408 | orchestrator | Monday 19 May 2025 14:48:54 +0000 (0:00:00.428) 0:00:00.428 ************
2025-05-19 14:52:54.114419 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:52:54.114452 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:52:54.114463 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:52:54.114473 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:52:54.114484 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:52:54.114494 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:52:54.114505 | orchestrator |
2025-05-19 14:52:54.114599 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:52:54.114612 | orchestrator | Monday 19 May 2025 14:48:54 +0000 (0:00:00.466) 0:00:00.894 ************
2025-05-19 14:52:54.114685 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-05-19 14:52:54.114697 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-05-19 14:52:54.114708 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-05-19 14:52:54.114719 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-05-19 14:52:54.114729 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-05-19 14:52:54.114740 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-05-19 14:52:54.114751 | orchestrator |
2025-05-19 14:52:54.114800 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-05-19 14:52:54.114813 | orchestrator |
2025-05-19 14:52:54.114824 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-19 14:52:54.114834 | orchestrator | Monday 19 May 2025 14:48:55 +0000 (0:00:00.619) 0:00:01.514 ************
2025-05-19 14:52:54.114845 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:52:54.114857 | orchestrator |
2025-05-19 14:52:54.114868 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-05-19 14:52:54.114879 | orchestrator | Monday 19 May 2025 14:48:56 +0000 (0:00:00.893) 0:00:02.407 ************
2025-05-19 14:52:54.114891 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-05-19 14:52:54.114903 | orchestrator |
2025-05-19 14:52:54.114915 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-05-19 14:52:54.114928 | orchestrator | Monday 19 May 2025 14:48:58 +0000 (0:00:02.822) 0:00:05.230 ************
2025-05-19 14:52:54.114940 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-05-19 14:52:54.114952 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-05-19 14:52:54.114983 | orchestrator |
2025-05-19 14:52:54.114996 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-05-19 14:52:54.115008 | orchestrator | Monday 19 May 2025 14:49:04 +0000 (0:00:05.473) 0:00:10.704 ************
2025-05-19 14:52:54.115020 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-19 14:52:54.115033 | orchestrator |
2025-05-19 14:52:54.115043 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-05-19 14:52:54.115054 | orchestrator | Monday 19 May 2025 14:49:07 +0000 (0:00:03.008) 0:00:13.712 ************
2025-05-19 14:52:54.115065 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-19 14:52:54.115075 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-05-19 14:52:54.115086 | orchestrator |
2025-05-19 14:52:54.115096 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-05-19 14:52:54.115107 | orchestrator | Monday 19 May 2025 14:49:11 +0000 (0:00:04.000) 0:00:17.713 ************
2025-05-19 14:52:54.115117 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-19 14:52:54.115128 | orchestrator |
2025-05-19 14:52:54.115138 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-05-19 14:52:54.115149 | orchestrator | Monday 19 May 2025 14:49:14 +0000 (0:00:03.290) 0:00:21.004 ************
2025-05-19 14:52:54.115190 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-05-19 14:52:54.115202 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
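The service-ks-register steps above register cinder in Keystone: one service, an internal and a public endpoint, the service project, the cinder user, and the admin/service role grants. A minimal openstacksdk sketch of the same sequence (the cloud name, region, and password below are placeholder assumptions, not values from this deployment; kolla-ansible actually drives this through Ansible modules):

    import openstack

    # "admin" is a placeholder clouds.yaml entry with admin credentials.
    conn = openstack.connect(cloud="admin")

    # Service and its endpoints, with the URLs shown in the log above.
    service = conn.identity.create_service(name="cinderv3", type="volumev3")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
        ("public", "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",  # region not shown in the log; assumption
        )

    # Service project, cinder user, and the two role grants.
    project = conn.identity.find_project("service")
    user = conn.identity.create_user(
        name="cinder", password="CHANGE_ME", default_project_id=project.id
    )
    for role_name in ("admin", "service"):
        role = conn.identity.find_role(role_name)
        conn.identity.assign_project_role_to_user(project, user, role)

The Ansible modules are idempotent, which is why re-runs report "ok" for resources that already exist (the service project and admin role above) and "changed" only for what they create.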
2025-05-19 14:52:54.115221 | orchestrator |
2025-05-19 14:52:54.115290 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-05-19 14:52:54.115302 | orchestrator | Monday 19 May 2025 14:49:22 +0000 (0:00:07.980) 0:00:28.985 ************
2025-05-19 14:52:54.115342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 14:52:54.115359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 14:52:54.115371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 14:52:54.115409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes':
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.115423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.115443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.115468 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.115481 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.115493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 14:52:54.115504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 14:52:54.115516 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 14:52:54.115537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
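Every container definition above embeds a healthcheck: the cinder-api container is probed with healthcheck_curl against its bound API port, while scheduler, volume, and backup are probed with healthcheck_port against the RabbitMQ port 5672. Functionally they boil down to checks like the following rough sketch (not kolla's actual scripts; the real healthcheck_port additionally verifies that the named process owns the connection):

    import socket
    import urllib.error
    import urllib.request

    def healthcheck_curl(url, timeout=30):
        # Healthy if the endpoint answers at all; an HTTP error status
        # still proves the API process is serving requests.
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except urllib.error.HTTPError:
            return True
        except OSError:
            return False

    def healthcheck_port(host, port, timeout=30):
        # Sketch only: tests TCP reachability of the port, whereas the
        # kolla script checks that the given process holds a connection
        # to this port.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Mirroring the cinder-api definition on testbed-node-0:
    # healthcheck_curl("http://192.168.16.10:8776")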
2025-05-19 14:52:54.115549 | orchestrator |
2025-05-19 14:52:54.115566 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-19 14:52:54.115577 | orchestrator | Monday 19 May 2025 14:49:25 +0000 (0:00:02.787) 0:00:31.773 ************
2025-05-19 14:52:54.115588 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:54.115599 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:52:54.115610 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:52:54.115621 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:52:54.115631 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:52:54.115642 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:52:54.115653 | orchestrator |
2025-05-19 14:52:54.115664 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-19 14:52:54.115674 | orchestrator | Monday 19 May 2025 14:49:26 +0000 (0:00:00.655) 0:00:32.428 ************
2025-05-19 14:52:54.115685 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:54.115696 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:52:54.115707 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:52:54.115717 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:52:54.115728 | orchestrator |
2025-05-19 14:52:54.115739 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-05-19 14:52:54.115750 | orchestrator | Monday 19 May 2025 14:49:27 +0000 (0:00:01.753) 0:00:34.182 ************
2025-05-19 14:52:54.115761 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-05-19 14:52:54.115772 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-05-19 14:52:54.115783 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-05-19 14:52:54.115793 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-05-19 14:52:54.115804 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-05-19 14:52:54.115815 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-05-19 14:52:54.115826 | orchestrator |
2025-05-19 14:52:54.115836 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-05-19 14:52:54.115847 | orchestrator | Monday 19 May 2025 14:49:29 +0000 (0:00:01.615) 0:00:35.797 ************
2025-05-19 14:52:54.115860 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-19 14:52:54.115878 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-19 14:52:54.115894 | orchestrator |
skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 14:52:54.115912 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 14:52:54.115924 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 14:52:54.115935 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 14:52:54.115952 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 14:52:54.115986 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 14:52:54.116005 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 14:52:54.116017 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 14:52:54.116036 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-19 14:52:54.116048 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-19 14:52:54.116059 | orchestrator |
2025-05-19 14:52:54.116070 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-05-19 14:52:54.116081 | orchestrator | Monday 19 May 2025 14:49:32 +0000 (0:00:03.063) 0:00:38.861 ************
2025-05-19 14:52:54.116092 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-05-19 14:52:54.116104 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-05-19 14:52:54.116114 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-05-19 14:52:54.116125 | orchestrator |
2025-05-19 14:52:54.116136 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-05-19 14:52:54.116147 | orchestrator | Monday 19 May 2025 14:49:34 +0000 (0:00:01.770) 0:00:40.632 ************
2025-05-19 14:52:54.116184 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-05-19 14:52:54.116196 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-05-19 14:52:54.116207 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-05-19 14:52:54.116218 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-05-19 14:52:54.116228 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-05-19 14:52:54.116245 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
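The external-Ceph tasks above only lay files on disk on the volume/backup hosts: a per-service ceph subdirectory, the cluster's ceph.conf, and the client keyrings for the rbd-1 backend. In spirit they amount to the following sketch (the source and destination paths are inferred from the task names and should be treated as assumptions, not kolla-ansible's exact layout):

    import shutil
    from pathlib import Path

    # Hypothetical staging directory holding the operator-provided
    # external Ceph files; in OSISM these come from the configuration
    # repository.
    SRC = Path("/opt/configuration/environments/kolla/files/ceph")

    KEYRINGS = ["ceph.client.cinder.keyring", "ceph.client.cinder-backup.keyring"]

    for service in ("cinder-volume", "cinder-backup"):
        dest = Path("/etc/kolla") / service / "ceph"   # layout assumed
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy(SRC / "ceph.conf", dest / "ceph.conf")
        for keyring in KEYRINGS:
            target = dest / keyring
            shutil.copy(SRC / keyring, target)
            target.chmod(0o600)  # keyrings are credentials: owner-only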
2025-05-19 14:52:54.116256 | orchestrator |
2025-05-19 14:52:54.116267 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-05-19 14:52:54.116278 | orchestrator | Monday 19 May 2025 14:49:37 +0000 (0:00:02.901) 0:00:43.533 ************
2025-05-19 14:52:54.116288 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-05-19 14:52:54.116299 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-05-19 14:52:54.116310 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-05-19 14:52:54.116321 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-05-19 14:52:54.116332 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-05-19 14:52:54.116343 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-05-19 14:52:54.116353 | orchestrator |
2025-05-19 14:52:54.116364 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-05-19 14:52:54.116387 | orchestrator | Monday 19 May 2025 14:49:38 +0000 (0:00:00.155) 0:00:44.589 ************
2025-05-19 14:52:54.116398 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:54.116408 | orchestrator |
2025-05-19 14:52:54.116419 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-05-19 14:52:54.116430 | orchestrator | Monday 19 May 2025 14:49:38 +0000 (0:00:00.640) 0:00:44.745 ************
2025-05-19 14:52:54.116440 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:52:54.116451 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:52:54.116461 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:52:54.116472 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:52:54.116482 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:52:54.116493 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:52:54.116503 | orchestrator |
2025-05-19 14:52:54.116514 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-19 14:52:54.116525 | orchestrator | Monday 19 May 2025 14:49:39 +0000 (0:00:00.640) 0:00:45.385 ************
2025-05-19 14:52:54.116536 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:52:54.116548 | orchestrator |
2025-05-19 14:52:54.116559 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-05-19 14:52:54.116569 | orchestrator | Monday 19 May 2025 14:49:40 +0000 (0:00:01.015) 0:00:46.401 ************
2025-05-19 14:52:54.116581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 14:52:54.116593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:52:54.116615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:52:54.116634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.116646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.116657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.116669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.116685 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.117288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.117329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.117341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.117353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.117364 | orchestrator | 2025-05-19 14:52:54.117375 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-19 14:52:54.117386 | orchestrator | Monday 19 May 2025 14:49:43 +0000 (0:00:03.359) 0:00:49.760 ************ 2025-05-19 14:52:54.117398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:52:54.117425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:52:54.117455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:52:54.117477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117488 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:52:54.117500 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:52:54.117511 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:52:54.117526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117562 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:52:54.117574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117596 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:52:54.117607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117665 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:52:54.117677 | orchestrator | 2025-05-19 14:52:54.117688 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-19 14:52:54.117699 | orchestrator | Monday 19 May 2025 14:49:44 +0000 (0:00:01.013) 0:00:50.774 ************ 2025-05-19 14:52:54.117717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:52:54.117729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:52:54.117752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117763 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:52:54.117777 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:52:54.117805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:52:54.117845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117867 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:52:54.117888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117922 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:52:54.117936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.117994 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:52:54.118060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.118077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2025-05-19 14:52:54.118088 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:52:54.118099 | orchestrator | 2025-05-19 14:52:54.118110 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-19 14:52:54.118121 | orchestrator | Monday 19 May 2025 14:49:46 +0000 (0:00:02.444) 0:00:53.218 ************ 2025-05-19 14:52:54.118132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:52:54.118144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:52:54.118171 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:52:54.118215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118264 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118310 | orchestrator | 2025-05-19 14:52:54.118321 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-19 14:52:54.118332 | orchestrator | Monday 19 May 2025 14:49:50 +0000 (0:00:03.175) 0:00:56.394 ************ 2025-05-19 14:52:54.118349 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-19 14:52:54.118360 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:52:54.118371 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-19 14:52:54.118381 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:52:54.118392 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-19 14:52:54.118403 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:52:54.118414 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-19 14:52:54.118424 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-19 14:52:54.118435 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-19 14:52:54.118445 | orchestrator | 2025-05-19 14:52:54.118456 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-19 14:52:54.118467 | orchestrator | Monday 19 May 2025 14:49:52 +0000 (0:00:02.316) 0:00:58.711 ************ 2025-05-19 14:52:54.118482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:52:54.118513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:52:54.118524 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:52:54.118541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118562 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.118652 | orchestrator | 2025-05-19 14:52:54.118663 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-19 14:52:54.118674 | orchestrator | Monday 19 May 2025 14:50:04 +0000 (0:00:12.158) 0:01:10.869 ************ 2025-05-19 14:52:54.118690 | orchestrator | skipping: [testbed-node-0] 2025-05-19 
14:52:54.118701 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:52:54.118712 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:52:54.118723 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:52:54.118734 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:52:54.118745 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:52:54.118755 | orchestrator | 2025-05-19 14:52:54.118766 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-19 14:52:54.118777 | orchestrator | Monday 19 May 2025 14:50:07 +0000 (0:00:02.490) 0:01:13.360 ************ 2025-05-19 14:52:54.118788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:52:54.118805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.118818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:52:54.118838 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:52:54.118858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.118877 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:52:54.118914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 14:52:54.118935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.118950 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:52:54.118995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.119008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
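
Each service entry in these items carries a Docker healthcheck: the API containers probe their HTTP endpoint with healthcheck_curl, while cinder-scheduler, cinder-volume and cinder-backup, which listen on no TCP port of their own, use healthcheck_port <service> 5672, which in kolla images checks the service's connection to RabbitMQ rather than probing a listener. A rough Python approximation of the two styles (an assumption for illustration; the real scripts are shell and shipped inside the images):

```python
# Rough Python approximation of the two healthcheck styles seen in the
# service definitions above; kolla's real healthcheck_curl and
# healthcheck_port are shell scripts, so this only mirrors their intent.

import socket
import urllib.request

def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
    """API-style check: the HTTP endpoint must answer at all."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:
        return False

def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Worker-style check, simplified: here we only verify the port is
    reachable. The real script instead verifies that the named service
    process holds an established connection to the port (RabbitMQ,
    5672), since the worker services expose no listener themselves."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(healthcheck_curl("http://192.168.16.10:8776"))
    print(healthcheck_port("192.168.16.10", 5672))
```
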
2025-05-19 14:52:54.119019 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:52:54.119030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.119046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.119057 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:52:54.119077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.119099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 14:52:54.119110 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:52:54.119121 | orchestrator | 2025-05-19 14:52:54.119132 | orchestrator | TASK [cinder : 
Copying over nfs_shares files for cinder_volume] **************** 2025-05-19 14:52:54.119143 | orchestrator | Monday 19 May 2025 14:50:08 +0000 (0:00:01.279) 0:01:14.639 ************ 2025-05-19 14:52:54.119154 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:52:54.119164 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:52:54.119175 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:52:54.119186 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:52:54.119196 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:52:54.119207 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:52:54.119217 | orchestrator | 2025-05-19 14:52:54.119228 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-19 14:52:54.119239 | orchestrator | Monday 19 May 2025 14:50:09 +0000 (0:00:00.630) 0:01:15.270 ************ 2025-05-19 14:52:54.119250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:52:54.119269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:52:54.119288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 14:52:54.119306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.119317 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.119328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.119344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.119362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.119383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.119394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.119406 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.119417 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 14:52:54.119428 | orchestrator | 2025-05-19 14:52:54.119439 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-19 14:52:54.119449 | orchestrator | Monday 19 May 2025 14:50:11 +0000 (0:00:02.177) 0:01:17.448 ************ 
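
The "Check cinder containers" task above follows kolla-ansible's usual pattern: compare the desired container spec (image, volumes, dimensions, healthcheck) against the running container and notify the matching restart handler on any drift; the actual restarts are deferred to the "Flush handlers" steps further down. A hypothetical sketch of that compare-and-notify flow:

```python
# Sketch of the check-and-notify pattern behind "Check cinder
# containers" (hypothetical, not the kolla container module itself):
# the desired spec is compared field by field with the running
# container, and any difference queues the matching
# "Restart <name> container" handler, which only runs once handlers
# are flushed.

from typing import Any

def diff_container(desired: dict[str, Any], running: dict[str, Any]) -> list[str]:
    """Return the spec fields that differ between desired and running."""
    keys = ("image", "volumes", "dimensions", "healthcheck")
    return [k for k in keys if desired.get(k) != running.get(k)]

notified: list[str] = []  # stands in for Ansible's handler queue

def check_container(name: str, desired: dict, running: dict) -> None:
    if diff_container(desired, running):
        notified.append(f"Restart {name} container")  # notify handler

# Example: a new image tag alone is enough to trigger a restart.
check_container(
    "cinder-api",
    {"image": "registry.osism.tech/kolla/cinder-api:2024.2"},
    {"image": "registry.osism.tech/kolla/cinder-api:2024.1"},
)
print(notified)  # ['Restart cinder-api container']
```
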
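
Before the handlers fire, the play bootstraps the database in the steps below: it creates the cinder schema, grants the service user access, and runs a one-shot bootstrap container, which conventionally performs the schema migration on a single delegate host (testbed-node-0 here) so migrations run exactly once. An illustrative sketch under assumed names and credentials, not kolla-ansible's actual module calls:

```python
# Illustrative sketch (assumed host, users and passwords) of what the
# three bootstrap steps below amount to.

import pymysql  # assumption: MariaDB/Galera reachable on the internal VIP

def bootstrap_cinder_db(host: str, admin_user: str, admin_password: str,
                        service_password: str) -> None:
    conn = pymysql.connect(host=host, user=admin_user, password=admin_password)
    try:
        with conn.cursor() as cur:
            # TASK "Creating Cinder database"
            cur.execute("CREATE DATABASE IF NOT EXISTS cinder")
            # TASK "Creating Cinder database user and setting permissions"
            cur.execute(
                "CREATE USER IF NOT EXISTS 'cinder'@'%%' IDENTIFIED BY %s",
                (service_password,))
            cur.execute("GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'")
        conn.commit()
    finally:
        conn.close()
    # TASK "Running Cinder bootstrap container": kolla then starts a
    # one-shot container that runs the equivalent of
    #   cinder-manage db sync
    # on the delegate host only, so the schema migration runs once.
```
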
2025-05-19 14:52:54.119460 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:52:54.119471 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:52:54.119482 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:52:54.119492 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:52:54.119503 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:52:54.119514 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:52:54.119524 | orchestrator | 2025-05-19 14:52:54.119535 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-19 14:52:54.119545 | orchestrator | Monday 19 May 2025 14:50:11 +0000 (0:00:00.637) 0:01:18.085 ************ 2025-05-19 14:52:54.119556 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:52:54.119566 | orchestrator | 2025-05-19 14:52:54.119577 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-19 14:52:54.119593 | orchestrator | Monday 19 May 2025 14:50:13 +0000 (0:00:01.712) 0:01:19.798 ************ 2025-05-19 14:52:54.119608 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:52:54.119619 | orchestrator | 2025-05-19 14:52:54.119630 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-19 14:52:54.119641 | orchestrator | Monday 19 May 2025 14:50:15 +0000 (0:00:01.899) 0:01:21.698 ************ 2025-05-19 14:52:54.119651 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:52:54.119662 | orchestrator | 2025-05-19 14:52:54.119672 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 14:52:54.119683 | orchestrator | Monday 19 May 2025 14:50:31 +0000 (0:00:15.933) 0:01:37.631 ************ 2025-05-19 14:52:54.119694 | orchestrator | 2025-05-19 14:52:54.119710 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 14:52:54.119721 | orchestrator | Monday 19 May 2025 14:50:31 +0000 (0:00:00.154) 0:01:37.786 ************ 2025-05-19 14:52:54.119732 | orchestrator | 2025-05-19 14:52:54.119743 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 14:52:54.119754 | orchestrator | Monday 19 May 2025 14:50:31 +0000 (0:00:00.146) 0:01:37.933 ************ 2025-05-19 14:52:54.119764 | orchestrator | 2025-05-19 14:52:54.119775 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 14:52:54.119786 | orchestrator | Monday 19 May 2025 14:50:31 +0000 (0:00:00.129) 0:01:38.063 ************ 2025-05-19 14:52:54.119797 | orchestrator | 2025-05-19 14:52:54.119807 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 14:52:54.119818 | orchestrator | Monday 19 May 2025 14:50:31 +0000 (0:00:00.131) 0:01:38.195 ************ 2025-05-19 14:52:54.119829 | orchestrator | 2025-05-19 14:52:54.119840 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 14:52:54.119850 | orchestrator | Monday 19 May 2025 14:50:32 +0000 (0:00:00.152) 0:01:38.347 ************ 2025-05-19 14:52:54.119861 | orchestrator | 2025-05-19 14:52:54.119873 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-19 14:52:54.119893 | orchestrator | Monday 19 May 2025 14:50:32 +0000 (0:00:00.154) 0:01:38.502 ************ 2025-05-19 14:52:54.119912 | orchestrator | changed: 
[testbed-node-0] 2025-05-19 14:52:54.119931 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:52:54.119952 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:52:54.120019 | orchestrator | 2025-05-19 14:52:54.120032 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-19 14:52:54.120043 | orchestrator | Monday 19 May 2025 14:51:01 +0000 (0:00:29.137) 0:02:07.639 ************ 2025-05-19 14:52:54.120053 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:52:54.120064 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:52:54.120075 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:52:54.120086 | orchestrator | 2025-05-19 14:52:54.120097 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-19 14:52:54.120108 | orchestrator | Monday 19 May 2025 14:51:11 +0000 (0:00:10.146) 0:02:17.786 ************ 2025-05-19 14:52:54.120118 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:52:54.120129 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:52:54.120140 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:52:54.120151 | orchestrator | 2025-05-19 14:52:54.120162 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-19 14:52:54.120172 | orchestrator | Monday 19 May 2025 14:52:36 +0000 (0:01:25.210) 0:03:42.996 ************ 2025-05-19 14:52:54.120183 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:52:54.120194 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:52:54.120205 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:52:54.120215 | orchestrator | 2025-05-19 14:52:54.120226 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-19 14:52:54.120237 | orchestrator | Monday 19 May 2025 14:52:51 +0000 (0:00:14.849) 0:03:57.846 ************ 2025-05-19 14:52:54.120256 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:52:54.120267 | orchestrator | 2025-05-19 14:52:54.120278 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:52:54.120289 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-19 14:52:54.120301 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-19 14:52:54.120312 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-19 14:52:54.120322 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-19 14:52:54.120333 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-19 14:52:54.120344 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-19 14:52:54.120355 | orchestrator | 2025-05-19 14:52:54.120366 | orchestrator | 2025-05-19 14:52:54.120377 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:52:54.120387 | orchestrator | Monday 19 May 2025 14:52:52 +0000 (0:00:00.675) 0:03:58.521 ************ 2025-05-19 14:52:54.120398 | orchestrator | =============================================================================== 2025-05-19 14:52:54.120409 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 85.21s 
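
The timing recap continues below, followed by a run of "Task … is in state STARTED" lines: those come from the OSISM client polling the state of the Celery tasks it submitted (the parallel service deployments, cinder among them) and sleeping one second between checks until each task reaches a final state such as SUCCESS. A minimal sketch of such a wait loop, assuming Celery's AsyncResult API and a hypothetical broker; the OSISM client's actual implementation may differ:

```python
# Minimal sketch of the task-state polling visible below (assumptions:
# a reachable Celery app and real task ids).

import time
from celery import Celery
from celery.result import AsyncResult

app = Celery(
    broker="redis://localhost:6379/0",   # hypothetical broker URL
    backend="redis://localhost:6379/1",  # hypothetical result backend
)

def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    """Poll each task until every one reaches a final Celery state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = AsyncResult(task_id, app=app).state
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE", "REVOKED"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```
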
2025-05-19 14:52:54.120419 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 29.14s 2025-05-19 14:52:54.120428 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 15.93s 2025-05-19 14:52:54.120442 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 14.85s 2025-05-19 14:52:54.120452 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.16s 2025-05-19 14:52:54.120462 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.15s 2025-05-19 14:52:54.120471 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.98s 2025-05-19 14:52:54.120481 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.47s 2025-05-19 14:52:54.120497 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.00s 2025-05-19 14:52:54.120507 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.36s 2025-05-19 14:52:54.120517 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.29s 2025-05-19 14:52:54.120526 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.18s 2025-05-19 14:52:54.120535 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.06s 2025-05-19 14:52:54.120545 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.01s 2025-05-19 14:52:54.120554 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.90s 2025-05-19 14:52:54.120564 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.82s 2025-05-19 14:52:54.120573 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.79s 2025-05-19 14:52:54.120583 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.49s 2025-05-19 14:52:54.120592 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.44s 2025-05-19 14:52:54.120602 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.32s 2025-05-19 14:52:54.120611 | orchestrator | 2025-05-19 14:52:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:52:57.153789 | orchestrator | 2025-05-19 14:52:57 | INFO  | Task facc8583-dae3-45d2-b5a2-054db528afae is in state STARTED 2025-05-19 14:52:57.154347 | orchestrator | 2025-05-19 14:52:57 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:52:57.154752 | orchestrator | 2025-05-19 14:52:57 | INFO  | Task 8b85e685-039e-4681-a402-e49d731296e3 is in state STARTED 2025-05-19 14:52:57.155519 | orchestrator | 2025-05-19 14:52:57 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED 2025-05-19 14:52:57.155555 | orchestrator | 2025-05-19 14:52:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:53:00.182114 | orchestrator | 2025-05-19 14:53:00 | INFO  | Task facc8583-dae3-45d2-b5a2-054db528afae is in state STARTED 2025-05-19 14:53:00.182204 | orchestrator | 2025-05-19 14:53:00 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:53:00.182512 | orchestrator | 2025-05-19 14:53:00 | INFO  | Task 8b85e685-039e-4681-a402-e49d731296e3 is in state STARTED 2025-05-19 
14:53:00.183074 | orchestrator | 2025-05-19 14:53:00 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:53:00.183098 | orchestrator | 2025-05-19 14:53:00 | INFO  | Wait 1 second(s) until the next check
[... polling output condensed: from 14:53:03 through 14:54:00 the tasks facc8583-dae3-45d2-b5a2-054db528afae, f1a37332-2342-4416-b799-9db1b8d29db6, 8b85e685-039e-4681-a402-e49d731296e3 and 2f985797-1356-404a-8054-ece8e3b4a04a are reported in state STARTED every ~3 seconds, each round followed by "Wait 1 second(s) until the next check" ...]
2025-05-19 14:54:03.839595 | orchestrator | 2025-05-19 14:54:03 | INFO  | Task facc8583-dae3-45d2-b5a2-054db528afae is in state STARTED
2025-05-19 14:54:03.839915 | orchestrator | 2025-05-19 14:54:03 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:54:03.841148 | orchestrator | 2025-05-19 14:54:03 | INFO  | Task 8b85e685-039e-4681-a402-e49d731296e3 is in state SUCCESS
2025-05-19 14:54:03.841255 | orchestrator |
2025-05-19 14:54:03.842664 | orchestrator |
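The block above is the OSISM client waiting for four long-running Celery tasks: it polls each task's state and sleeps between rounds until everything leaves STARTED. A minimal sketch of such a wait loop, with a hypothetical get_task_state() standing in for the real result-backend query (the actual client also handles failure output and timeouts):

import time

def get_task_state(task_id: str) -> str:
    # Hypothetical helper: ask the Celery result backend for the task state.
    raise NotImplementedError

def wait_for_tasks(task_ids, interval=1):
    # Poll until every task reaches a terminal state (SUCCESS or FAILURE).
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard() is safe
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

Once the deploy task reaches SUCCESS, the buffered Ansible output of the play it ran is flushed to the console, which is why all the following records share the 14:54:03 timestamp.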
2025-05-19 14:54:03.842698 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:54:03.842780 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 14:54:03.842793 | orchestrator | Monday 19 May 2025 14:52:06 +0000 (0:00:00.230) 0:00:00.230 ************
2025-05-19 14:54:03.842804 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:54:03.842852 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:54:03.842864 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:54:03.842887 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:54:03.842897 | orchestrator | Monday 19 May 2025 14:52:07 +0000 (0:00:00.259) 0:00:00.489 ************
2025-05-19 14:54:03.842908 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-05-19 14:54:03.842920 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-05-19 14:54:03.842931 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-05-19 14:54:03.842952 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-05-19 14:54:03.843000 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-19 14:54:03.843053 | orchestrator | Monday 19 May 2025 14:52:07 +0000 (0:00:00.337) 0:00:00.827 ************
2025-05-19 14:54:03.843065 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:54:03.843088 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-05-19 14:54:03.843099 | orchestrator | Monday 19 May 2025 14:52:07 +0000 (0:00:00.462) 0:00:01.289 ************
2025-05-19 14:54:03.843110 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-05-19 14:54:03.843132 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-05-19 14:54:03.843143 | orchestrator | Monday 19 May 2025 14:52:10 +0000 (0:00:03.017) 0:00:04.306 ************
2025-05-19 14:54:03.843153 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-05-19 14:54:03.843164 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-05-19 14:54:03.843186 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-05-19 14:54:03.843197 | orchestrator | Monday 19 May 2025 14:52:16 +0000 (0:00:06.029) 0:00:10.336 ************
2025-05-19 14:54:03.843207 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-19 14:54:03.843229 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-05-19 14:54:03.843254 | orchestrator | Monday 19 May 2025 14:52:19 +0000 (0:00:03.041) 0:00:13.377 ************
2025-05-19 14:54:03.843265 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-19 14:54:03.843276 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
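The service-ks-register tasks above are plain Keystone admin operations. A rough openstacksdk sketch of the same registration (illustrative only, not the kolla-ansible implementation; cloud name, region and password are placeholders):

import openstack

conn = openstack.connect(cloud="admin")  # admin credentials from clouds.yaml (assumed)

# "Creating services" and "Creating endpoints"
service = conn.identity.create_service(name="barbican", type="key-manager")
for interface, url in [("internal", "https://api-int.testbed.osism.xyz:9311"),
                       ("public", "https://api.testbed.osism.xyz:9311")]:
    conn.identity.create_endpoint(service_id=service.id, interface=interface,
                                  url=url, region_id="RegionOne")  # region assumed

# "Creating users" and "Granting user roles" (barbican -> service -> admin)
project = conn.identity.find_project("service")
user = conn.identity.create_user(name="barbican", password="<secret>",
                                 default_project_id=project.id)
conn.identity.assign_project_role_to_user(project, user,
                                          conn.identity.find_role("admin"))

The [WARNING] about no_log printed by the user-creation task comes from Ansible's heuristic that flags password-related module options (here update_password) which are not explicitly marked no_log.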
2025-05-19 14:54:03.843298 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-05-19 14:54:03.843308 | orchestrator | Monday 19 May 2025 14:52:23 +0000 (0:00:03.709) 0:00:17.086 ************
2025-05-19 14:54:03.843319 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-19 14:54:03.843330 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-05-19 14:54:03.843341 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-05-19 14:54:03.843352 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-05-19 14:54:03.843363 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-05-19 14:54:03.843384 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-05-19 14:54:03.843395 | orchestrator | Monday 19 May 2025 14:52:39 +0000 (0:00:15.416) 0:00:32.502 ************
2025-05-19 14:54:03.843414 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-05-19 14:54:03.843436 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-05-19 14:54:03.843446 | orchestrator | Monday 19 May 2025 14:52:42 +0000 (0:00:03.786) 0:00:36.289 ************
2025-05-19 14:54:03.843461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
[... matching changed items condensed: barbican-api on testbed-node-1/-2 (healthcheck_curl against 192.168.16.11/.12) and barbican-keystone-listener plus barbican-worker on all three nodes (test 'healthcheck_port barbican-keystone-listener 5672' resp. 'healthcheck_port barbican-worker 5672', no haproxy section) ...]
2025-05-19 14:54:03.843617 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-05-19 14:54:03.843628 | orchestrator | Monday 19 May 2025 14:52:44 +0000 (0:00:01.791) 0:00:38.081 ************
2025-05-19 14:54:03.843639 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-05-19 14:54:03.843650 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-05-19 14:54:03.843660 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-05-19 14:54:03.843682 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-05-19 14:54:03.843692 | orchestrator | Monday 19 May 2025 14:52:46 +0000 (0:00:01.438) 0:00:39.519 ************
2025-05-19 14:54:03.843703 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:54:03.843725 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-05-19 14:54:03.843740 | orchestrator | Monday 19 May 2025 14:52:46 +0000 (0:00:00.099) 0:00:39.619 ************
2025-05-19 14:54:03.843757 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:54:03.843768 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:54:03.843779 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:54:03.843800 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-19 14:54:03.843811 | orchestrator | Monday 19 May 2025 14:52:46 +0000 (0:00:00.349) 0:00:39.968 ************
2025-05-19 14:54:03.843822 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
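Every item in these dumps is one kolla-ansible service definition, and the same few keys recur for each container. Stripped to its skeleton (values copied from the barbican-api dump above; the comments are interpretation, not part of the data):

barbican_api = {
    "container_name": "barbican_api",
    "group": "barbican-api",  # inventory group the container is scheduled on
    "image": "registry.osism.tech/kolla/barbican-api:2024.2",
    "volumes": [
        "/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro",  # rendered config
        "barbican:/var/lib/barbican/",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {  # becomes the container healthcheck
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
        "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
    },
    "haproxy": {  # drives HAProxy frontend generation
        "barbican_api": {"port": "9311", "external": False},
        "barbican_api_external": {"port": "9311", "external": True,
                                  "external_fqdn": "api.testbed.osism.xyz"},
    },
}

The API containers are probed over HTTP (healthcheck_curl), while the keystone-listener and worker are probed via their AMQP connection on port 5672 (healthcheck_port).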
2025-05-19 14:54:03.843843 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-05-19 14:54:03.843854 | orchestrator | Monday 19 May 2025 14:52:47 +0000 (0:00:00.815) 0:00:40.784 ************
[... changed: [testbed-node-0/-1/-2] for each of barbican-api, barbican-keystone-listener and barbican-worker; the nine item dumps are identical to the service definitions above and are condensed ...]
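The two probe commands in the healthcheck definitions, healthcheck_curl and healthcheck_port, are helper scripts shipped inside the kolla images. Their behaviour is approximately the following sketch (exit code 0 means healthy; note the real healthcheck_port verifies that the named process owns a connection to the port rather than opening a new one, which this simplification does not reproduce):

import socket
import urllib.request

def healthcheck_curl(url: str) -> int:
    # Healthy if the HTTP endpoint answers at all.
    try:
        urllib.request.urlopen(url, timeout=30)
        return 0
    except Exception:
        return 1

def healthcheck_port(host: str, port: int) -> int:
    # Simplified: just test TCP reachability of the port.
    try:
        with socket.create_connection((host, port), timeout=30):
            return 0
    except OSError:
        return 1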
2025-05-19 14:54:03.844082 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-05-19 14:54:03.844093 | orchestrator | Monday 19 May 2025 14:52:51 +0000 (0:00:03.916) 0:00:44.701 ************
[... skipping: [testbed-node-0/-1/-2] for each of barbican-api, barbican-keystone-listener and barbican-worker; item dumps identical to the service definitions above, condensed ...]
2025-05-19 14:54:03.844285 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-05-19 14:54:03.844296 | orchestrator | Monday 19 May 2025 14:52:52 +0000 (0:00:01.151) 0:00:45.852 ************
[... skipping: [testbed-node-0/-1/-2] for each of barbican-api, barbican-keystone-listener and barbican-worker; item dumps condensed ...]
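Both backend TLS copy tasks end in skipping on every node: each service definition above carries 'tls_backend': 'no', and backend TLS is not enabled globally in this testbed, so there is no certificate or key to distribute. The per-item condition reduces to something like the following (an illustrative reconstruction, not the role's literal expression):

def should_copy_backend_tls(service: dict, kolla_enable_tls_backend: bool = False) -> bool:
    # Copy backend certificate/key only if TLS towards the backend is enabled
    # globally or for one of the service's HAProxy frontends.
    frontends = service.get("haproxy", {}).values()
    return kolla_enable_tls_backend or any(
        fe.get("tls_backend") == "yes" for fe in frontends
    )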
2025-05-19 14:54:03.844500 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-05-19 14:54:03.844511 | orchestrator | Monday 19 May 2025 14:52:52 +0000 (0:00:00.553) 0:00:46.405 ************
[... changed: [testbed-node-0/-1/-2] for each of barbican-api, barbican-keystone-listener and barbican-worker; item dumps identical to the service definitions above, condensed ...]
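The config.json file copied for each service drives kolla_start inside the container: it names the command to exec and the files to move from /var/lib/kolla/config_files/ into place with the right owner and mode. Schematically (command and paths are illustrative for barbican-api, not copied from the testbed):

import json

config_json = {
    "command": "uwsgi --master --emperor /etc/barbican/vassals",  # illustrative
    "config_files": [
        {"source": "/var/lib/kolla/config_files/barbican.conf",
         "dest": "/etc/barbican/barbican.conf",
         "owner": "barbican", "perm": "0600"},
    ],
    "permissions": [
        {"path": "/var/log/kolla/barbican", "owner": "barbican:barbican", "recurse": True},
    ],
}
print(json.dumps(config_json, indent=2))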
2025-05-19 14:54:03.844907 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-05-19 14:54:03.844918 | orchestrator | Monday 19 May 2025 14:52:56 +0000 (0:00:03.239) 0:00:49.645 ************
2025-05-19 14:54:03.844929 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:54:03.844940 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:54:03.844950 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:54:03.844972 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-05-19 14:54:03.844982 | orchestrator | Monday 19 May 2025 14:52:58 +0000 (0:00:02.513) 0:00:52.161 ************
2025-05-19 14:54:03.844993 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-19 14:54:03.845033 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-05-19 14:54:03.845044 | orchestrator | Monday 19 May 2025 14:52:59 +0000 (0:00:01.318) 0:00:53.480 ************
2025-05-19 14:54:03.845055 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:54:03.845065 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:54:03.845076 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:54:03.845097 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-05-19 14:54:03.845107 | orchestrator | Monday 19 May 2025 14:53:00 +0000 (0:00:00.597) 0:00:54.078 ************
[... changed: [testbed-node-0/-1/-2] for each of barbican-api, barbican-keystone-listener and barbican-worker; item dumps condensed ...]
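barbican.conf itself is produced by merging the role's default template with any operator overrides (kolla-ansible's merge_configs action). The merge is per-option, with later sources winning, which a configparser-based sketch can approximate (paths illustrative):

import configparser

def merge_configs(*paths: str) -> configparser.ConfigParser:
    merged = configparser.ConfigParser()
    for path in paths:
        merged.read(path)  # read() layers files; existing options are overwritten
    return merged

# e.g. merge_configs("barbican.conf.default", "/etc/kolla/config/barbican.conf")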
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:54:03.845234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:54:03.845252 | orchestrator | 2025-05-19 14:54:03.845263 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-19 14:54:03.845274 | orchestrator | Monday 19 May 2025 14:53:07 +0000 (0:00:06.795) 0:01:00.873 ************ 2025-05-19 14:54:03.845292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 14:54:03.845304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 14:54:03.845319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:54:03.845331 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:54:03.845343 | orchestrator | skipping: 
2025-05-19 14:54:03.845263 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-05-19 14:54:03.845274 | orchestrator | Monday 19 May 2025 14:53:07 +0000 (0:00:06.795)       0:01:00.873 ************
2025-05-19 14:54:03.845292 | orchestrator | skipping: [testbed-node-0] => (items barbican-api, barbican-keystone-listener, barbican-worker; same service definitions as above)
2025-05-19 14:54:03.845331 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:54:03.845343 | orchestrator | skipping: [testbed-node-1] => (items barbican-api, barbican-keystone-listener, barbican-worker)
2025-05-19 14:54:03.845388 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:54:03.845400 | orchestrator | skipping: [testbed-node-2] => (items barbican-api, barbican-keystone-listener, barbican-worker)
2025-05-19 14:54:03.845438 | orchestrator | skipping: [testbed-node-2]
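[Editor's note] The healthcheck dicts in these service definitions map one-to-one onto Docker's native healthcheck options. As an illustrative stand-in (kolla-ansible uses its own container module, not community.docker), the barbican-api check would look like:

    - name: Run barbican_api with the logged healthcheck   # illustrative stand-in
      community.docker.docker_container:
        name: barbican_api
        image: registry.osism.tech/kolla/barbican-api:2024.2
        healthcheck:
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"]
          interval: 30s
          timeout: 30s
          retries: 3
          start_period: 5s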
2025-05-19 14:54:03.845460 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2025-05-19 14:54:03.845471 | orchestrator | Monday 19 May 2025 14:53:08 +0000 (0:00:00.947)       0:01:01.821 ************
2025-05-19 14:54:03.845482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-19 14:54:03.845505 | orchestrator | changed: [testbed-node-1] => (item=barbican-api, same definition with healthcheck_curl http://192.168.16.11:9311)
2025-05-19 14:54:03.845517 | orchestrator | changed: [testbed-node-2] => (item=barbican-api, same definition with healthcheck_curl http://192.168.16.12:9311)
2025-05-19 14:54:03.845534 | orchestrator | changed: [testbed-node-0] => (item=barbican-keystone-listener, same service definition as above)
2025-05-19 14:54:03.845545 | orchestrator | changed: [testbed-node-1] => (item=barbican-keystone-listener, same service definition as above)
2025-05-19 14:54:03.845557 | orchestrator | changed: [testbed-node-2] => (item=barbican-keystone-listener, same service definition as above)
2025-05-19 14:54:03.845577 | orchestrator | changed: [testbed-node-0] => (item=barbican-worker, same service definition as above)
2025-05-19 14:54:03.845596 | orchestrator | changed: [testbed-node-1] => (item=barbican-worker, same service definition as above)
2025-05-19 14:54:03.845608 | orchestrator | changed: [testbed-node-2] => (item=barbican-worker, same service definition as above)
2025-05-19 14:54:03.845630 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-19 14:54:03.845641 | orchestrator | Monday 19 May 2025 14:53:11 +0000 (0:00:02.940)       0:01:04.761 ************
2025-05-19 14:54:03.845652 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:54:03.845663 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:54:03.845673 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:54:03.845694 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-05-19 14:54:03.845705 | orchestrator | Monday 19 May 2025 14:53:11 +0000 (0:00:00.307)       0:01:05.069 ************
2025-05-19 14:54:03.845715 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:54:03.845736 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-05-19 14:54:03.845747 | orchestrator | Monday 19 May 2025 14:53:13 +0000 (0:00:02.038)       0:01:07.108 ************
2025-05-19 14:54:03.845757 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:54:03.845778 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-05-19 14:54:03.845789 | orchestrator | Monday 19 May 2025 14:53:15 +0000 (0:00:02.296)       0:01:09.405 ************
2025-05-19 14:54:03.845799 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:54:03.845821 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-19 14:54:03.845836 | orchestrator | Monday 19 May 2025 14:53:26 +0000 (0:00:10.995)       0:01:20.400 ************
2025-05-19 14:54:03.845857 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-19 14:54:03.845875 | orchestrator | Monday 19 May 2025 14:53:27 +0000 (0:00:00.204)       0:01:20.605 ************
2025-05-19 14:54:03.845896 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-19 14:54:03.845906 | orchestrator | Monday 19 May 2025 14:53:27 +0000 (0:00:00.208)       0:01:20.813 ************
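[Editor's note] The two database tasks above boil down to an idempotent database-plus-user creation, run once against the cluster. A minimal sketch with the community.mysql modules; the host and credential variable names are assumptions, not taken from the log:

    - name: Creating barbican database
      community.mysql.mysql_db:
        login_host: "{{ database_address }}"          # assumption
        login_user: root
        login_password: "{{ database_password }}"     # assumption
        name: barbican
      run_once: true

    - name: Creating barbican database user and setting permissions
      community.mysql.mysql_user:
        login_host: "{{ database_address }}"
        login_user: root
        login_password: "{{ database_password }}"
        name: barbican
        password: "{{ barbican_database_password }}"  # assumption
        host: "%"
        priv: "barbican.*:ALL"
      run_once: true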
2025-05-19 14:54:03.845927 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-05-19 14:54:03.845938 | orchestrator | Monday 19 May 2025 14:53:27 +0000 (0:00:00.367)       0:01:21.180 ************
2025-05-19 14:54:03.845949 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:54:03.845959 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:54:03.845970 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:54:03.845991 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-05-19 14:54:03.846105 | orchestrator | Monday 19 May 2025 14:53:40 +0000 (0:00:13.302)       0:01:34.483 ************
2025-05-19 14:54:03.846122 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:54:03.846134 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:54:03.846145 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:54:03.846166 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-05-19 14:54:03.846177 | orchestrator | Monday 19 May 2025 14:53:52 +0000 (0:00:11.102)       0:01:45.585 ************
2025-05-19 14:54:03.846188 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:54:03.846199 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:54:03.846210 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:54:03.846232 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:54:03.846243 | orchestrator | testbed-node-0 : ok=24 changed=18 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2025-05-19 14:54:03.846256 | orchestrator | testbed-node-1 : ok=14 changed=10 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
2025-05-19 14:54:03.846267 | orchestrator | testbed-node-2 : ok=14 changed=10 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
2025-05-19 14:54:03.846300 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:54:03.846311 | orchestrator | Monday 19 May 2025 14:54:00 +0000 (0:00:08.614)       0:01:54.199 ************
2025-05-19 14:54:03.846321 | orchestrator | ===============================================================================
2025-05-19 14:54:03.846332 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.42s
2025-05-19 14:54:03.846350 | orchestrator | barbican : Restart barbican-api container ------------------------------ 13.30s
2025-05-19 14:54:03.846362 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.10s
2025-05-19 14:54:03.846373 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.00s
2025-05-19 14:54:03.846383 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.61s
2025-05-19 14:54:03.846394 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.80s
2025-05-19 14:54:03.846405 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.03s
2025-05-19 14:54:03.846416 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.92s
2025-05-19 14:54:03.846427 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.79s
2025-05-19 14:54:03.846437 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.71s
2025-05-19 14:54:03.846448 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.24s
2025-05-19 14:54:03.846459 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.04s
2025-05-19 14:54:03.846478 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.02s
2025-05-19 14:54:03.846490 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.94s
2025-05-19 14:54:03.846499 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.52s
2025-05-19 14:54:03.846509 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.30s
2025-05-19 14:54:03.846518 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.04s
2025-05-19 14:54:03.846527 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.79s
2025-05-19 14:54:03.846537 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.44s
2025-05-19 14:54:03.846546 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.32s
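[Editor's note] The three "Restart ... container" handlers that dominate the recap above fire because earlier config tasks reported changed and notified them; the explicit "Flush handlers" tasks force them to run at that point instead of at play end. A minimal sketch of the pattern; the container module shown is a stand-in for kolla-ansible's own:

    - hosts: testbed-nodes            # placeholder group name
      tasks:
        - name: Copying over barbican.conf
          ansible.builtin.template:
            src: barbican.conf.j2
            dest: /etc/kolla/barbican-api/barbican.conf
          notify: Restart barbican-api container

        - name: Flush handlers        # run pending handlers now, not at play end
          ansible.builtin.meta: flush_handlers
      handlers:
        - name: Restart barbican-api container
          community.docker.docker_container:   # stand-in for kolla's module
            name: barbican_api
            image: registry.osism.tech/kolla/barbican-api:2024.2
            state: started
            restart: true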
2025-05-19 14:54:03.846556 | orchestrator | 2025-05-19 14:54:03 | INFO  | Task 7788854f-36ef-4669-b323-2359e77abc0a is in state STARTED
2025-05-19 14:54:03.846566 | orchestrator | 2025-05-19 14:54:03 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state STARTED
2025-05-19 14:54:03.846576 | orchestrator | 2025-05-19 14:54:03 | INFO  | Wait 1 second(s) until the next check
[... the manager keeps polling the running tasks facc8583-dae3-45d2-b5a2-054db528afae, f1a37332-2342-4416-b799-9db1b8d29db6, 7788854f-36ef-4669-b323-2359e77abc0a and 2f985797-1356-404a-8054-ece8e3b4a04a roughly every 3 seconds, each pass logging "is in state STARTED" per task plus "Wait 1 second(s) until the next check"; identical iterations from 14:54:06 through 14:54:40 omitted ...]
2025-05-19 14:54:43.396473 | orchestrator | 2025-05-19 14:54:43 | INFO  | Task 7788854f-36ef-4669-b323-2359e77abc0a is in state SUCCESS
2025-05-19 14:54:43.397658 | orchestrator | 2025-05-19 14:54:43 | INFO  | Task 00acbb28-b103-4425-9b74-2da7b5f729d4 is in state STARTED
[... polling continues for facc8583-..., f1a37332-..., 2f985797-... and the new task 00acbb28-...; identical iterations from 14:54:46 through 14:55:29 omitted ...]
2025-05-19 14:55:32.192264 | orchestrator | 2025-05-19 14:55:32 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:55:32.196305 | orchestrator | 2025-05-19 14:55:32 | INFO  | Task 2f985797-1356-404a-8054-ece8e3b4a04a is in state SUCCESS
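[Editor's note] The "is in state STARTED ... Wait 1 second(s)" lines are the OSISM client polling the Celery task IDs it launched until each reports SUCCESS. Expressed as an Ansible retry loop against a hypothetical status endpoint (the real client does this internally; URL and field names are assumptions):

    - name: Wait until a deployment task reports SUCCESS
      ansible.builtin.uri:
        url: "http://manager:8000/api/tasks/{{ task_id }}"  # hypothetical endpoint
        return_content: true
      register: task_status
      until: task_status.json.state == "SUCCESS"
      retries: 600
      delay: 1     # mirrors "Wait 1 second(s) until the next check"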
2025-05-19 14:55:32.196882 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-05-19 14:55:32.196905 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-05-19 14:55:32.196917 | orchestrator | Monday 19 May 2025 14:54:05 +0000 (0:00:00.092)       0:00:00.093 ************
2025-05-19 14:55:32.196929 | orchestrator | changed: [localhost]
2025-05-19 14:55:32.196953 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-05-19 14:55:32.196965 | orchestrator | Monday 19 May 2025 14:54:06 +0000 (0:00:00.826)       0:00:00.919 ************
2025-05-19 14:55:32.196976 | orchestrator | changed: [localhost]
2025-05-19 14:55:32.196998 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-05-19 14:55:32.197010 | orchestrator | Monday 19 May 2025 14:54:35 +0000 (0:00:29.479)       0:00:30.398 ************
2025-05-19 14:55:32.197021 | orchestrator | changed: [localhost]
2025-05-19 14:55:32.197069 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:55:32.197091 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 14:55:32.197102 | orchestrator | Monday 19 May 2025 14:54:39 +0000 (0:00:04.339)       0:00:34.738 ************
2025-05-19 14:55:32.197112 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:55:32.197123 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:55:32.197134 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:55:32.197156 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:55:32.197168 | orchestrator | Monday 19 May 2025 14:54:40 +0000 (0:00:00.496)       0:00:35.235 ************
2025-05-19 14:55:32.197179 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-05-19 14:55:32.197190 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-05-19 14:55:32.197202 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-05-19 14:55:32.197213 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-05-19 14:55:32.197261 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-05-19 14:55:32.197272 | orchestrator | skipping: no hosts matched
2025-05-19 14:55:32.197295 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:55:32.197307 | orchestrator | localhost : ok=3 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:55:32.197319 | orchestrator | testbed-node-0 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:55:32.197332 | orchestrator | testbed-node-1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:55:32.197344 | orchestrator | testbed-node-2 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:55:32.197377 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:55:32.197388 | orchestrator | Monday 19 May 2025 14:54:41 +0000 (0:00:01.514)       0:00:36.749 ************
2025-05-19 14:55:32.197399 | orchestrator | ===============================================================================
2025-05-19 14:55:32.197409 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 29.48s
2025-05-19 14:55:32.197420 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.34s
2025-05-19 14:55:32.197431 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.52s
2025-05-19 14:55:32.197442 | orchestrator | Ensure the destination directory exists --------------------------------- 0.83s
2025-05-19 14:55:32.197511 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s
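[Editor's note] The two download steps that dominate this recap fetch the ironic-python-agent kernel and initramfs onto the deploy host; a sketch of what such a task presumably looks like (URL and destination variables are assumptions):

    - name: Download ironic-agent initramfs
      ansible.builtin.get_url:
        url: "{{ ironic_agent_initramfs_url }}"                     # assumption
        dest: "{{ ironic_agent_dest_dir }}/ironic-agent.initramfs"  # assumption
        mode: "0644"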
2025-05-19 14:55:32.198603 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:55:32.198652 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 14:55:32.198664 | orchestrator | Monday 19 May 2025 14:51:34 +0000 (0:00:00.250)       0:00:00.250 ************
2025-05-19 14:55:32.198675 | orchestrator | ok: [testbed-node-0..5] (all six nodes)
2025-05-19 14:55:32.198807 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:55:32.198818 | orchestrator | Monday 19 May 2025 14:51:35 +0000 (0:00:00.679)       0:00:00.929 ************
2025-05-19 14:55:32.198829 | orchestrator | ok: [testbed-node-0..5] => (item=enable_neutron_True)
2025-05-19 14:55:32.198914 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-05-19 14:55:32.198937 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-19 14:55:32.198948 | orchestrator | Monday 19 May 2025 14:51:36 +0000 (0:00:00.726)       0:00:01.656 ************
2025-05-19 14:55:32.198982 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 14:55:32.199020 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-05-19 14:55:32.199052 | orchestrator | Monday 19 May 2025 14:51:37 +0000 (0:00:01.508)       0:00:03.164 ************
2025-05-19 14:55:32.199064 | orchestrator | ok: [testbed-node-0..5]
2025-05-19 14:55:32.199139 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-05-19 14:55:32.199151 | orchestrator | Monday 19 May 2025 14:51:39 +0000 (0:00:01.426)       0:00:04.591 ************
2025-05-19 14:55:32.199163 | orchestrator | ok: [testbed-node-0..5]
2025-05-19 14:55:32.199455 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-05-19 14:55:32.199467 | orchestrator | Monday 19 May 2025 14:51:40 +0000 (0:00:01.005)       0:00:05.597 ************
2025-05-19 14:55:32.199480 | orchestrator | ok: [testbed-node-0..5] => {"changed": false, "msg": "All assertions passed"}
2025-05-19 14:55:32.199758 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-05-19 14:55:32.199769 | orchestrator | Monday 19 May 2025 14:51:41 +0000 (0:00:00.750)       0:00:06.348 ************
2025-05-19 14:55:32.199780 | orchestrator | skipping: [testbed-node-0..5]
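[Editor's note] The grouping plays and the ML2/OVN check above follow two stock Ansible patterns: group_by builds dynamic groups such as enable_neutron_True, and assert's default success output is exactly the logged "All assertions passed". Schematically (the variable names are assumptions):

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: enable_neutron_{{ enable_neutron | bool }}

    - name: Check for ML2/OVN presence
      ansible.builtin.assert:
        that:
          - neutron_plugin_agent == "ovn"   # assumption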
2025-05-19 14:55:32.199855 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-05-19 14:55:32.199866 | orchestrator | Monday 19 May 2025 14:51:41 +0000 (0:00:00.536)       0:00:06.885 ************
2025-05-19 14:55:32.199877 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-05-19 14:55:32.199899 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-05-19 14:55:32.199918 | orchestrator | Monday 19 May 2025 14:51:44 +0000 (0:00:03.275)       0:00:10.160 ************
2025-05-19 14:55:32.199928 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-05-19 14:55:32.199940 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-05-19 14:55:32.199975 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-05-19 14:55:32.199986 | orchestrator | Monday 19 May 2025 14:51:51 +0000 (0:00:06.173)       0:00:16.333 ************
2025-05-19 14:55:32.199997 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-19 14:55:32.200061 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-05-19 14:55:32.200074 | orchestrator | Monday 19 May 2025 14:51:54 +0000 (0:00:03.108)       0:00:19.442 ************
2025-05-19 14:55:32.200085 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-19 14:55:32.200096 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-05-19 14:55:32.200117 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-05-19 14:55:32.200128 | orchestrator | Monday 19 May 2025 14:51:57 +0000 (0:00:03.721)       0:00:23.163 ************
2025-05-19 14:55:32.200139 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-19 14:55:32.200160 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-05-19 14:55:32.200171 | orchestrator | Monday 19 May 2025 14:52:00 +0000 (0:00:03.127)       0:00:26.291 ************
2025-05-19 14:55:32.200181 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-05-19 14:55:32.200192 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-05-19 14:55:32.200227 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-19 14:55:32.200244 | orchestrator | Monday 19 May 2025 14:52:08 +0000 (0:00:07.236)       0:00:33.528 ************
2025-05-19 14:55:32.200255 | orchestrator | skipping: [testbed-node-0..5]
2025-05-19 14:55:32.200331 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-05-19 14:55:32.200341 | orchestrator | Monday 19 May 2025 14:52:08 +0000 (0:00:00.553)       0:00:34.081 ************
2025-05-19 14:55:32.200352 | orchestrator | skipping: [testbed-node-0..5]
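[Editor's note] The service-ks-register tasks above amount to registering the neutron service, its two endpoints, and the service user with Keystone. Sketched with the openstack.cloud collection (authentication parameters omitted; the endpoint URLs are taken from the log):

    - name: neutron | Creating services
      openstack.cloud.catalog_service:
        name: neutron
        service_type: network

    - name: neutron | Creating endpoints
      openstack.cloud.endpoint:
        service: neutron
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:9696" }
        - { interface: public, url: "https://api.testbed.osism.xyz:9696" }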
2025-05-19 14:55:32.200427 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-05-19 14:55:32.200438 | orchestrator | Monday 19 May 2025 14:52:10 +0000 (0:00:01.863)       0:00:35.945 ************
2025-05-19 14:55:32.200449 | orchestrator | ok: [testbed-node-0..5]
2025-05-19 14:55:32.200523 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-19 14:55:32.200534 | orchestrator | Monday 19 May 2025 14:52:11 +0000 (0:00:01.025)       0:00:36.971 ************
2025-05-19 14:55:32.200552 | orchestrator | skipping: [testbed-node-0..5]
2025-05-19 14:55:32.200627 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-05-19 14:55:32.200638 | orchestrator | Monday 19 May 2025 14:52:13 +0000 (0:00:02.090)       0:00:39.061 ************
2025-05-19 14:55:32.200653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 14:55:32.200677 | orchestrator | changed: [testbed-node-0] => (item=neutron-server, same definition with healthcheck_curl http://192.168.16.10:9696)
2025-05-19 14:55:32.200695 | orchestrator | changed: [testbed-node-1] => (item=neutron-server, same definition with healthcheck_curl http://192.168.16.11:9696)
2025-05-19 14:55:32.200708 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-19 14:55:32.200727 | orchestrator | changed: [testbed-node-3] => (item=neutron-ovn-metadata-agent, same service definition as above)
2025-05-19 14:55:32.200739 | orchestrator | changed: [testbed-node-4] => (item=neutron-ovn-metadata-agent, same service definition as above)
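[Editor's note] "Ensuring config directories exist" conventionally loops a file task over the service map shown in the items above; a sketch (ownership and mode are assumptions):

    - name: Ensuring config directories exist
      ansible.builtin.file:
        path: "/etc/kolla/{{ item.key }}"
        state: directory
        owner: root
        group: root
        mode: "0770"
      with_dict: "{{ neutron_services }}"   # variable name is an assumption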
'30'}}}) 2025-05-19 14:55:32.200750 | orchestrator | 2025-05-19 14:55:32.200762 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-19 14:55:32.200773 | orchestrator | Monday 19 May 2025 14:52:16 +0000 (0:00:02.701) 0:00:41.763 ************ 2025-05-19 14:55:32.200784 | orchestrator | [WARNING]: Skipped 2025-05-19 14:55:32.200795 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-19 14:55:32.200806 | orchestrator | due to this access issue: 2025-05-19 14:55:32.200817 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-19 14:55:32.200828 | orchestrator | a directory 2025-05-19 14:55:32.200838 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:55:32.200849 | orchestrator | 2025-05-19 14:55:32.200859 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-19 14:55:32.200876 | orchestrator | Monday 19 May 2025 14:52:17 +0000 (0:00:00.832) 0:00:42.595 ************ 2025-05-19 14:55:32.200887 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:55:32.200900 | orchestrator | 2025-05-19 14:55:32.200910 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-19 14:55:32.200921 | orchestrator | Monday 19 May 2025 14:52:18 +0000 (0:00:01.141) 0:00:43.736 ************ 2025-05-19 14:55:32.200937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.200950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.200968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.200979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.200999 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.201015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.201086 | orchestrator | 2025-05-19 14:55:32.201100 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-19 14:55:32.201110 | orchestrator | Monday 19 May 2025 14:52:21 +0000 (0:00:03.179) 0:00:46.915 ************ 2025-05-19 14:55:32.201122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.201133 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.201145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.201155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.201165 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.201175 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.201192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.201202 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.201229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.201239 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.201249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.201259 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.201269 | orchestrator | 2025-05-19 14:55:32.201278 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-19 14:55:32.201288 | orchestrator | Monday 19 May 2025 14:52:24 +0000 (0:00:02.394) 0:00:49.310 ************ 2025-05-19 14:55:32.201298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.201308 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.201327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.201337 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.201347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.201367 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.201377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.201387 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.201397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.201407 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.201417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.201427 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.201436 | orchestrator | 2025-05-19 14:55:32.201446 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-19 14:55:32.201456 | orchestrator | Monday 19 May 2025 14:52:26 +0000 (0:00:02.480) 0:00:51.791 ************ 2025-05-19 14:55:32.201465 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.201475 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.201484 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.201494 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.201503 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.201512 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.201522 | orchestrator | 2025-05-19 14:55:32.201532 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-19 14:55:32.201552 | orchestrator | Monday 19 May 2025 14:52:28 +0000 (0:00:02.317) 0:00:54.108 ************ 2025-05-19 14:55:32.201562 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.201572 | orchestrator | 2025-05-19 14:55:32.201581 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-19 14:55:32.201591 | orchestrator | Monday 19 May 2025 14:52:28 +0000 (0:00:00.156) 0:00:54.265 ************ 2025-05-19 14:55:32.201601 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.201610 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.201619 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.201629 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.201638 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.201648 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.201657 | orchestrator | 2025-05-19 14:55:32.201667 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-19 14:55:32.201677 | orchestrator | Monday 19 May 2025 14:52:29 +0000 (0:00:00.813) 0:00:55.079 ************ 2025-05-19 14:55:32.201691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.201701 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.201711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.201721 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.201731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.201741 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.202224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.202257 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.202267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.202277 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.202293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.202304 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.202313 | orchestrator | 2025-05-19 14:55:32.202323 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-19 14:55:32.202332 | orchestrator | Monday 19 May 2025 14:52:32 +0000 (0:00:02.389) 0:00:57.468 ************ 2025-05-19 14:55:32.202342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.202430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.202441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.202450 | orchestrator | 2025-05-19 14:55:32.202460 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-19 14:55:32.202470 | orchestrator | Monday 19 May 2025 14:52:35 +0000 (0:00:03.802) 0:01:01.271 ************ 2025-05-19 14:55:32.202479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.202528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.202538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.202564 | orchestrator | 2025-05-19 14:55:32.202574 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-19 14:55:32.202583 | orchestrator | Monday 19 May 2025 14:52:43 +0000 (0:00:07.499) 0:01:08.770 ************ 2025-05-19 14:55:32.202600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.202611 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.202625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.202635 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.202645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.202654 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.202665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202712 | orchestrator | 2025-05-19 14:55:32.202721 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-19 14:55:32.202731 | orchestrator 
| Monday 19 May 2025 14:52:46 +0000 (0:00:03.056) 0:01:11.827 ************ 2025-05-19 14:55:32.202740 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.202750 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:32.202760 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.202769 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.202778 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:55:32.202788 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:55:32.202797 | orchestrator | 2025-05-19 14:55:32.202811 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-19 14:55:32.202822 | orchestrator | Monday 19 May 2025 14:52:49 +0000 (0:00:02.924) 0:01:14.751 ************ 2025-05-19 14:55:32.202834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.202845 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.202857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.202878 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.202890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.202901 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.202918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.202965 | orchestrator | 2025-05-19 14:55:32.202976 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-19 14:55:32.202987 | orchestrator | Monday 19 May 2025 14:52:53 +0000 (0:00:03.621) 0:01:18.372 ************ 2025-05-19 14:55:32.202997 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.203008 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.203018 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.203047 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.203059 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.203069 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.203079 | orchestrator | 2025-05-19 14:55:32.203090 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-19 14:55:32.203101 | orchestrator | Monday 19 May 2025 14:52:55 +0000 (0:00:02.159) 0:01:20.532 ************ 
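[annotation, not job output] The loop items printed above all share one shape: a service key plus a value dict with container_name, image, volumes, and a healthcheck block. A minimal Python sketch of that shape, with the field values copied from this log; the docker_healthcheck_args helper is a hypothetical illustration of how such a healthcheck dict could map onto Docker flags, not kolla-ansible's actual mechanism.

    # Service definition as printed in the loop items above (values verbatim
    # from this log; empty-string volume entries dropped for brevity).
    neutron_server = {
        "container_name": "neutron_server",
        "image": "registry.osism.tech/kolla/neutron-server:2024.2",
        "enabled": True,
        "group": "neutron-server",
        "host_in_groups": True,
        "volumes": [
            "/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
            "timeout": "30",
        },
    }

    def docker_healthcheck_args(hc: dict) -> list[str]:
        # Hypothetical helper: render the healthcheck dict as 'docker run'
        # style flags, purely to show what each field controls.
        return [
            f"--health-cmd={hc['test'][1]}",
            f"--health-interval={hc['interval']}s",
            f"--health-retries={hc['retries']}",
            f"--health-start-period={hc['start_period']}s",
            f"--health-timeout={hc['timeout']}s",
        ]

    print(docker_healthcheck_args(neutron_server["healthcheck"]))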
2025-05-19 14:55:32.203111 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.203122 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.203133 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.203144 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.203154 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.203165 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.203175 | orchestrator | 2025-05-19 14:55:32.203185 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-19 14:55:32.203195 | orchestrator | Monday 19 May 2025 14:52:57 +0000 (0:00:02.666) 0:01:23.199 ************ 2025-05-19 14:55:32.203204 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.203213 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.203223 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.203232 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.203241 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.203251 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.203260 | orchestrator | 2025-05-19 14:55:32.203270 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-19 14:55:32.203279 | orchestrator | Monday 19 May 2025 14:53:00 +0000 (0:00:02.160) 0:01:25.360 ************ 2025-05-19 14:55:32.203288 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.203298 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.203307 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.203316 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.203326 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.203335 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.203344 | orchestrator | 2025-05-19 14:55:32.203354 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-19 14:55:32.203363 | orchestrator | Monday 19 May 2025 14:53:02 +0000 (0:00:02.261) 0:01:27.621 ************ 2025-05-19 14:55:32.203372 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.203382 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.203391 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.203400 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.203410 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.203419 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.203428 | orchestrator | 2025-05-19 14:55:32.203442 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-19 14:55:32.203453 | orchestrator | Monday 19 May 2025 14:53:04 +0000 (0:00:01.794) 0:01:29.416 ************ 2025-05-19 14:55:32.203462 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.203471 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.203481 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.203490 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.203499 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.203515 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.203524 | orchestrator | 2025-05-19 14:55:32.203534 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-19 14:55:32.203543 | orchestrator | Monday 19 May 2025 14:53:06 +0000 (0:00:02.255) 0:01:31.671 
************ 2025-05-19 14:55:32.203553 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 14:55:32.203562 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.203572 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 14:55:32.203581 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.203591 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 14:55:32.203600 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.203610 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 14:55:32.203619 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.203629 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 14:55:32.203639 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.203648 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 14:55:32.203658 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.203667 | orchestrator | 2025-05-19 14:55:32.203677 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-19 14:55:32.203686 | orchestrator | Monday 19 May 2025 14:53:08 +0000 (0:00:02.258) 0:01:33.929 ************ 2025-05-19 14:55:32.203732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.203743 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.203753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.203763 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.203774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.203795 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.203805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.203815 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.203830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.203840 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.203850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.203860 | orchestrator | skipping: [testbed-node-4] 2025-05-19 
14:55:32.203869 | orchestrator | 2025-05-19 14:55:32.203879 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-19 14:55:32.203889 | orchestrator | Monday 19 May 2025 14:53:11 +0000 (0:00:02.574) 0:01:36.504 ************ 2025-05-19 14:55:32.203898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.203915 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.203931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.203941 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.203956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.203966 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.203976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.203986 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.203995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.204005 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.204015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.204053 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.204063 | orchestrator | 2025-05-19 14:55:32.204073 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-19 14:55:32.204083 | orchestrator | Monday 19 May 2025 14:53:12 +0000 (0:00:01.746) 0:01:38.250 ************ 2025-05-19 14:55:32.204092 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.204102 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.204111 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.204121 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.204130 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.204145 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.204155 | orchestrator | 2025-05-19 14:55:32.204164 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-19 14:55:32.204174 | orchestrator | Monday 19 May 2025 14:53:14 +0000 (0:00:01.877) 0:01:40.128 ************ 2025-05-19 14:55:32.204183 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.204193 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.204202 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.204211 | orchestrator | changed: [testbed-node-5] 
2025-05-19 14:55:32.204221 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:55:32.204230 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:55:32.204239 | orchestrator |
2025-05-19 14:55:32.204249 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-05-19 14:55:32.204259 | orchestrator | Monday 19 May 2025 14:53:18 +0000 (0:00:03.238) 0:01:43.367 ************
2025-05-19 14:55:32.204268 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:32.204277 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:32.204287 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:32.204296 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:55:32.204306 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:55:32.204315 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:55:32.204324 | orchestrator |
2025-05-19 14:55:32.204333 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-05-19 14:55:32.204343 | orchestrator | Monday 19 May 2025 14:53:19 +0000 (0:00:01.905) 0:01:45.272 ************
2025-05-19 14:55:32.204353 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:32.204367 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:32.204376 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:55:32.204386 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:32.204395 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:55:32.204404 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:55:32.204414 | orchestrator |
2025-05-19 14:55:32.204423 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-05-19 14:55:32.204433 | orchestrator | Monday 19 May 2025 14:53:21 +0000 (0:00:01.896) 0:01:47.168 ************
2025-05-19 14:55:32.204442 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:32.204452 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:55:32.204461 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:55:32.204470 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:32.204480 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:32.204489 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:55:32.204498 | orchestrator |
2025-05-19 14:55:32.204508 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-05-19 14:55:32.204523 | orchestrator | Monday 19 May 2025 14:53:23 +0000 (0:00:02.083) 0:01:49.252 ************
2025-05-19 14:55:32.204533 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:32.204542 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:32.204552 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:32.204561 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:55:32.204570 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:55:32.204580 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:55:32.204589 | orchestrator |
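The long runs of "skipping" above are kolla-ansible rendering a config template only where the matching service is enabled and the host belongs to that service's group; agents disabled in this testbed (VPNaaS, metering, ironic-neutron-agent, BGP dragent) are therefore skipped on every node. A minimal sketch of that decision, with illustrative service definitions rather than the actual kolla-ansible data:

# Minimal sketch (not kolla-ansible code): decide which config templates a
# host should render, mirroring the skip pattern in the log above.
services = {
    "neutron-server": {"enabled": True, "group": "neutron-server"},
    "neutron-ovn-metadata-agent": {"enabled": True, "group": "neutron-ovn-metadata-agent"},
    "neutron-bgp-dragent": {"enabled": False, "group": "neutron-bgp-dragent"},
}

def templates_to_render(host_groups: set) -> list:
    """Return the services whose configs this host renders; all others show up as 'skipping'."""
    return [
        name
        for name, svc in services.items()
        if svc["enabled"] and svc["group"] in host_groups
    ]

# A control node renders neutron-server; a compute node only the metadata agent;
# the disabled BGP dragent is rendered nowhere.
print(templates_to_render({"neutron-server"}))              # ['neutron-server']
print(templates_to_render({"neutron-ovn-metadata-agent"}))  # ['neutron-ovn-metadata-agent']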
2025-05-19 14:55:32.204599 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-05-19 14:55:32.204608 | orchestrator | Monday 19 May 2025 14:53:25 +0000 (0:00:01.788) 0:01:51.041 ************
2025-05-19 14:55:32.204618 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:32.204627 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:32.204637 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:55:32.204646 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:55:32.204656 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:32.204665 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:55:32.204675 | orchestrator |
2025-05-19 14:55:32.204684 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-05-19 14:55:32.204693 | orchestrator | Monday 19 May 2025 14:53:29 +0000 (0:00:04.163) 0:01:55.204 ************
2025-05-19 14:55:32.204703 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:55:32.204712 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:55:32.204722 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:55:32.204731 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:32.204740 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:32.204750 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:32.204759 | orchestrator |
2025-05-19 14:55:32.204769 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-05-19 14:55:32.204778 | orchestrator | Monday 19 May 2025 14:53:32 +0000 (0:00:02.683) 0:01:57.888 ************
2025-05-19 14:55:32.204788 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:32.204797 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:32.204806 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:32.204816 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:55:32.204825 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:55:32.204835 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:55:32.204844 | orchestrator |
2025-05-19 14:55:32.204853 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-05-19 14:55:32.204863 | orchestrator | Monday 19 May 2025 14:53:34 +0000 (0:00:02.171) 0:02:00.060 ************
2025-05-19 14:55:32.204873 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:32.204882 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:32.204891 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:32.204901 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:55:32.204910 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:55:32.204919 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:55:32.204929 | orchestrator |
2025-05-19 14:55:32.204939 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-05-19 14:55:32.204948 | orchestrator | Monday 19 May 2025 14:53:37 +0000 (0:00:02.846) 0:02:02.906 ************
2025-05-19 14:55:32.204958 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-19 14:55:32.204967 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:32.204977 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-19 14:55:32.204987 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:32.205001 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-19 14:55:32.205011 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:32.205020 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-19 14:55:32.205054 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:55:32.205064 | orchestrator | skipping: [testbed-node-4] =>
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 14:55:32.205074 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.205084 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 14:55:32.205093 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.205103 | orchestrator | 2025-05-19 14:55:32.205112 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-19 14:55:32.205122 | orchestrator | Monday 19 May 2025 14:53:41 +0000 (0:00:04.209) 0:02:07.116 ************ 2025-05-19 14:55:32.205137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.205147 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.205157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.205167 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.205177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.205187 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.205202 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 14:55:32.205218 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.205228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.205238 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.205255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 14:55:32.205265 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.205275 | orchestrator | 2025-05-19 14:55:32.205285 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-19 14:55:32.205294 | orchestrator | Monday 19 May 2025 14:53:45 +0000 (0:00:03.431) 0:02:10.547 ************ 2025-05-19 14:55:32.205304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.205314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.205365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.205377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.205392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 14:55:32.205403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 14:55:32.205413 | orchestrator | 2025-05-19 14:55:32.205422 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-19 14:55:32.205432 | orchestrator | Monday 19 May 2025 14:53:48 +0000 (0:00:03.040) 0:02:13.588 ************ 2025-05-19 14:55:32.205442 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:32.205451 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:32.205461 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:32.205470 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:55:32.205480 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:55:32.205489 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:55:32.205499 | orchestrator | 2025-05-19 14:55:32.205514 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-19 14:55:32.205524 | orchestrator | Monday 19 May 2025 14:53:49 +0000 (0:00:00.768) 0:02:14.356 ************ 2025-05-19 14:55:32.205533 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:32.205543 | orchestrator | 2025-05-19 14:55:32.205552 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-19 14:55:32.205562 | orchestrator | Monday 19 May 2025 14:53:51 +0000 (0:00:01.978) 0:02:16.335 ************ 2025-05-19 14:55:32.205571 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:32.205581 | orchestrator | 2025-05-19 14:55:32.205591 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-19 14:55:32.205600 | orchestrator | Monday 19 May 2025 14:53:53 +0000 (0:00:02.335) 0:02:18.670 ************ 2025-05-19 14:55:32.205610 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:32.205619 | orchestrator | 2025-05-19 14:55:32.205629 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 14:55:32.205638 | orchestrator | Monday 19 May 2025 14:54:34 +0000 (0:00:40.742) 0:02:59.412 ************ 2025-05-19 14:55:32.205648 | orchestrator | 2025-05-19 14:55:32.205657 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 14:55:32.205667 | orchestrator | Monday 19 May 2025 14:54:34 +0000 (0:00:00.147) 0:02:59.560 ************ 2025-05-19 14:55:32.205676 | orchestrator | 2025-05-19 14:55:32.205686 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 14:55:32.205700 | orchestrator | Monday 19 May 2025 14:54:34 +0000 (0:00:00.619) 0:03:00.179 ************ 
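The sequence just above is the usual kolla-ansible bootstrap ordering: the Neutron database and its user are created first, a one-shot bootstrap container then applies the schema migrations (for neutron this is typically neutron-db-manage upgrade head, which accounts for the 40-second task), and only the handlers flushed afterwards restart the long-running containers, so neutron-server first comes up against a fully migrated schema. A rough sketch of that ordering, assuming plain docker commands; in a real deployment the bootstrap container also mounts the generated config under /var/lib/kolla/config_files:

# Illustrative sketch only (not OSISM/kolla-ansible code); image and
# container names are taken from the log, the commands are assumptions.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Database and DB user already exist at this point (created via the
#    MariaDB tasks in the log).
# 2. One-shot bootstrap container applies the schema migrations.
run(["docker", "run", "--rm",
     "registry.osism.tech/kolla/neutron-server:2024.2",
     "neutron-db-manage", "upgrade", "head"])
# 3. Only then do the flushed handlers restart the service containers.
run(["docker", "restart", "neutron_server"])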
2025-05-19 14:55:32.205710 | orchestrator |
2025-05-19 14:55:32.205720 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-19 14:55:32.205729 | orchestrator | Monday 19 May 2025 14:54:34 +0000 (0:00:00.082) 0:03:00.261 ************
2025-05-19 14:55:32.205739 | orchestrator |
2025-05-19 14:55:32.205748 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-19 14:55:32.205758 | orchestrator | Monday 19 May 2025 14:54:35 +0000 (0:00:00.124) 0:03:00.386 ************
2025-05-19 14:55:32.205767 | orchestrator |
2025-05-19 14:55:32.205777 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-19 14:55:32.205786 | orchestrator | Monday 19 May 2025 14:54:35 +0000 (0:00:00.079) 0:03:00.465 ************
2025-05-19 14:55:32.205796 | orchestrator |
2025-05-19 14:55:32.205805 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-05-19 14:55:32.205815 | orchestrator | Monday 19 May 2025 14:54:35 +0000 (0:00:00.069) 0:03:00.534 ************
2025-05-19 14:55:32.205824 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:55:32.205834 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:55:32.205843 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:55:32.205853 | orchestrator |
2025-05-19 14:55:32.205862 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-05-19 14:55:32.205872 | orchestrator | Monday 19 May 2025 14:55:05 +0000 (0:00:29.886) 0:03:30.421 ************
2025-05-19 14:55:32.205882 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:55:32.205895 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:55:32.205905 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:55:32.205915 | orchestrator |
2025-05-19 14:55:32.205924 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:55:32.205934 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-19 14:55:32.205944 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-19 14:55:32.205954 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-19 14:55:32.205964 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-19 14:55:32.205979 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-19 14:55:32.205989 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-19 14:55:32.205998 | orchestrator |
2025-05-19 14:55:32.206008 | orchestrator |
2025-05-19 14:55:32.206119 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:55:32.206130 | orchestrator | Monday 19 May 2025 14:55:29 +0000 (0:00:24.105) 0:03:54.526 ************
2025-05-19 14:55:32.206139 | orchestrator | ===============================================================================
2025-05-19 14:55:32.206149 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.74s
2025-05-19 14:55:32.206158 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.89s
2025-05-19 14:55:32.206168 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 24.11s
2025-05-19 14:55:32.206178 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.50s
2025-05-19 14:55:32.206187 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.24s
2025-05-19 14:55:32.206196 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.17s
2025-05-19 14:55:32.206206 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 4.21s
2025-05-19 14:55:32.206215 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 4.16s
2025-05-19 14:55:32.206223 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.80s
2025-05-19 14:55:32.206231 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.72s
2025-05-19 14:55:32.206239 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.62s
2025-05-19 14:55:32.206247 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.43s
2025-05-19 14:55:32.206254 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.28s
2025-05-19 14:55:32.206262 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.24s
2025-05-19 14:55:32.206270 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.18s
2025-05-19 14:55:32.206278 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.13s
2025-05-19 14:55:32.206285 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.11s
2025-05-19 14:55:32.206293 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.06s
2025-05-19 14:55:32.206301 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.04s
2025-05-19 14:55:32.206309 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.92s
2025-05-19 14:55:32.206322 | orchestrator | 2025-05-19 14:55:32 | INFO  | Task 00acbb28-b103-4425-9b74-2da7b5f729d4 is in state STARTED
2025-05-19 14:55:32.206330 | orchestrator | 2025-05-19 14:55:32 | INFO  | Wait 1 second(s) until the next check
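The "Task … is in state STARTED" lines are the OSISM tooling waiting on asynchronous tasks it has dispatched to its workers: each task ID is polled once per second until it leaves STARTED (here the first one ends in SUCCESS below). A minimal sketch of that loop, where get_task_state() is a hypothetical stand-in for however the backend (e.g. Celery results) is actually queried:

# Minimal sketch of the 1-second polling loop visible in the log
# (illustrative only; get_task_state() is a hypothetical placeholder).
import time

def get_task_state(task_id):
    raise NotImplementedError  # backend-specific lookup

def wait_for_tasks(task_ids, interval=1.0):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)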
2025-05-19 14:55:35.233244 | orchestrator | 2025-05-19 14:55:35 | INFO  | Task facc8583-dae3-45d2-b5a2-054db528afae is in state STARTED
2025-05-19 14:55:35.234108 | orchestrator | 2025-05-19 14:55:35 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:55:35.234943 | orchestrator | 2025-05-19 14:55:35 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:55:35.235641 | orchestrator | 2025-05-19 14:55:35 | INFO  | Task 00acbb28-b103-4425-9b74-2da7b5f729d4 is in state STARTED
2025-05-19 14:55:35.235805 | orchestrator | 2025-05-19 14:55:35 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:55:38.281506 | orchestrator | 2025-05-19 14:55:38 | INFO  | Task facc8583-dae3-45d2-b5a2-054db528afae is in state STARTED
2025-05-19 14:55:38.282258 | orchestrator | 2025-05-19 14:55:38 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:55:38.283709 | orchestrator | 2025-05-19 14:55:38 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:55:38.285113 | orchestrator | 2025-05-19 14:55:38 | INFO  | Task 00acbb28-b103-4425-9b74-2da7b5f729d4 is in state STARTED
2025-05-19 14:55:38.285200 | orchestrator | 2025-05-19 14:55:38 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:55:41.338975 | orchestrator | 2025-05-19 14:55:41 | INFO  | Task facc8583-dae3-45d2-b5a2-054db528afae is in state STARTED
2025-05-19 14:55:41.340216 | orchestrator | 2025-05-19 14:55:41 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:55:41.342321 | orchestrator | 2025-05-19 14:55:41 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:55:41.344201 | orchestrator | 2025-05-19 14:55:41 | INFO  | Task 00acbb28-b103-4425-9b74-2da7b5f729d4 is in state STARTED
2025-05-19 14:55:41.344285 | orchestrator | 2025-05-19 14:55:41 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:55:44.393528 | orchestrator | 2025-05-19 14:55:44 | INFO  | Task facc8583-dae3-45d2-b5a2-054db528afae is in state STARTED
2025-05-19 14:55:44.393636 | orchestrator | 2025-05-19 14:55:44 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:55:44.395560 | orchestrator | 2025-05-19 14:55:44 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:55:44.397026 | orchestrator | 2025-05-19 14:55:44 | INFO  | Task 00acbb28-b103-4425-9b74-2da7b5f729d4 is in state STARTED
2025-05-19 14:55:44.397084 | orchestrator | 2025-05-19 14:55:44 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:55:47.451841 | orchestrator | 2025-05-19 14:55:47 | INFO  | Task facc8583-dae3-45d2-b5a2-054db528afae is in state STARTED
2025-05-19 14:55:47.452557 | orchestrator | 2025-05-19 14:55:47 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:55:47.454837 | orchestrator | 2025-05-19 14:55:47 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:55:47.455776 | orchestrator | 2025-05-19 14:55:47 | INFO  | Task 00acbb28-b103-4425-9b74-2da7b5f729d4 is in state STARTED
2025-05-19 14:55:47.455786 | orchestrator | 2025-05-19 14:55:47 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:55:50.506882 | orchestrator |
2025-05-19 14:55:50.506988 | orchestrator | 2025-05-19 14:55:50 | INFO  | Task facc8583-dae3-45d2-b5a2-054db528afae is in state SUCCESS
2025-05-19 14:55:50.507998 | orchestrator |
2025-05-19 14:55:50.508060 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:55:50.508075 | orchestrator |
2025-05-19 14:55:50.508087 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 14:55:50.508098 | orchestrator | Monday 19 May 2025 14:52:57 +0000 (0:00:00.243) 0:00:00.243 ************
2025-05-19 14:55:50.508109 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:55:50.508121 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:55:50.508132 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:55:50.508143 | orchestrator |
2025-05-19 14:55:50.508154 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:55:50.508250 | orchestrator | Monday 19 May 2025 14:52:58 +0000 (0:00:00.276) 0:00:00.520 ************
2025-05-19 14:55:50.508265 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-05-19 14:55:50.508276 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-05-19 14:55:50.508287 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-05-19 14:55:50.508323 | orchestrator |
2025-05-19 14:55:50.508335 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-05-19 14:55:50.508346 | orchestrator |
2025-05-19 14:55:50.508357 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-19 14:55:50.508458 | orchestrator | Monday 19 May 2025 14:52:58 +0000 (0:00:00.676) 0:00:01.196 ************
2025-05-19 14:55:50.508469 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:55:50.508481 | orchestrator |
2025-05-19 14:55:50.508492 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-05-19 14:55:50.508503 | orchestrator | Monday 19 May 2025 14:52:59 +0000 (0:00:00.839) 0:00:02.036 ************
2025-05-19 14:55:50.508513 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-05-19 14:55:50.508524 | orchestrator |
2025-05-19 14:55:50.508534 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-05-19 14:55:50.508545 | orchestrator | Monday 19 May 2025 14:53:02 +0000 (0:00:03.311) 0:00:05.347 ************
2025-05-19 14:55:50.508555 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-05-19 14:55:50.508566 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-05-19 14:55:50.508577 | orchestrator |
2025-05-19 14:55:50.508587 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-05-19 14:55:50.508599 | orchestrator | Monday 19 May 2025 14:53:08 +0000 (0:00:06.002) 0:00:11.350 ************
2025-05-19 14:55:50.508611 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-19 14:55:50.508623 | orchestrator |
2025-05-19 14:55:50.508635 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-05-19 14:55:50.508661 | orchestrator | Monday 19 May 2025 14:53:12 +0000 (0:00:03.239) 0:00:14.589 ************
2025-05-19 14:55:50.508673 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-19 14:55:50.508685 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-05-19 14:55:50.508697 | orchestrator |
2025-05-19 14:55:50.508709 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-05-19 14:55:50.508720 | orchestrator | Monday 19 May 2025 14:53:16 +0000 (0:00:03.792) 0:00:18.381 ************
2025-05-19 14:55:50.508731 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-19 14:55:50.508743 | orchestrator |
2025-05-19 14:55:50.508754 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-05-19 14:55:50.508766 | orchestrator | Monday 19 May 2025 14:53:19 +0000 (0:00:03.241) 0:00:21.623 ************
2025-05-19 14:55:50.508778 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-05-19 14:55:50.508789 | orchestrator |
2025-05-19 14:55:50.508801 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-05-19 14:55:50.508813 | orchestrator | Monday 19 May 2025 14:53:23 +0000 (0:00:03.919) 0:00:25.543 ************
2025-05-19 14:55:50.508829 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 14:55:50.508860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 14:55:50.508882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 14:55:50.508897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.508916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.508929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.508942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.508967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.508979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.508990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509303 | orchestrator | 2025-05-19 14:55:50.509314 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-19 14:55:50.509325 | orchestrator | Monday 19 May 2025 14:53:26 +0000 (0:00:02.996) 0:00:28.540 ************ 2025-05-19 14:55:50.509336 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:50.509347 | orchestrator | 2025-05-19 14:55:50.509358 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-19 14:55:50.509368 | orchestrator | Monday 19 May 2025 14:53:26 +0000 (0:00:00.222) 0:00:28.763 ************ 2025-05-19 14:55:50.509378 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:50.509389 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:50.509400 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:50.509410 | orchestrator | 2025-05-19 14:55:50.509421 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-19 14:55:50.509431 | orchestrator | Monday 19 May 2025 14:53:26 +0000 (0:00:00.463) 0:00:29.227 ************ 2025-05-19 14:55:50.509442 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:55:50.509463 | orchestrator | 2025-05-19 14:55:50.509474 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-19 14:55:50.509485 | orchestrator | Monday 19 May 2025 14:53:28 +0000 (0:00:01.837) 0:00:31.064 ************ 2025-05-19 14:55:50.509496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 14:55:50.509515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 14:55:50.509527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 14:55:50.509544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.509710 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.509731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.509748 | orchestrator |
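The long (item=...) dumps in these tasks are entries of the designate service map that the role iterates over. Restated as YAML from the values shown above (the variable name designate_services is assumed from kolla-ansible conventions), one entry looks like:

    designate_services:
      designate-worker:
        container_name: designate_worker
        group: designate-worker
        enabled: true
        image: registry.osism.tech/kolla/designate-worker:2024.2
        volumes:
          - "/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "kolla_logs:/var/log/kolla/"
        dimensions: {}
        healthcheck:
          interval: 30
          retries: 3
          start_period: 5
          test: ["CMD-SHELL", "healthcheck_port designate-worker 5672"]
          timeout: 30

The healthcheck_port test passes when the process inside the container holds a connection to the given port (here the RabbitMQ port, 5672), while the API container instead probes its HTTP endpoint with healthcheck_curl.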
2025-05-19 14:55:50.509758 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-05-19 14:55:50.509768 | orchestrator | Monday 19 May 2025 14:53:35 +0000 (0:00:07.056) 0:00:38.120 ************
2025-05-19 14:55:50.509778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 14:55:50.509788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 14:55:50.509804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.509814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.509824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.509838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.509855 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:50.509892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 14:55:50.509902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:55:50.510615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510695 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:50.510705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2025-05-19 14:55:50.510715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:55:50.510737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510787 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:50.510797 | orchestrator | 2025-05-19 14:55:50.510807 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-19 14:55:50.510817 | orchestrator | Monday 19 May 2025 14:53:37 +0000 (0:00:01.892) 
0:00:40.013 ************ 2025-05-19 14:55:50.510826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.510836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:55:50.510852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510888 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510902 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:50.510913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.510923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:55:50.510938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510958 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.510983 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:50.510997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.511007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:55:50.511017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.511053 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.511064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.511081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.511091 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:50.511101 | orchestrator |
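Both backend TLS tasks skip on all three nodes: TLS is terminated at haproxy in this deployment and the per-service backend listeners remain plain HTTP. In kolla-ansible this is controlled from globals.yml, roughly as sketched here (not taken from this run):

    # globals.yml -- sketch; the default of "no" is what makes the
    # certificate and key copy tasks above skip on every node
    kolla_enable_tls_backend: "yes"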
2025-05-19 14:55:50.511131 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-05-19 14:55:50.511152 | orchestrator | Monday 19 May 2025 14:53:39 +0000 (0:00:02.068) 0:00:42.081 ************
2025-05-19 14:55:50.511167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 14:55:50.511178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 14:55:50.511195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 14:55:50.511206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 14:55:50.511224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 14:55:50.511243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 14:55:50.511254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True,
'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511337 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.511415 | orchestrator |
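The config.json files written in the previous task drive kolla_start inside each container: they name the command to exec and the files to copy from /var/lib/kolla/config_files/ into place at startup. For designate-api it typically looks like the following sketch (structure per kolla conventions; the exact command and permissions are assumptions):

    {
      "command": "designate-api --config-file /etc/designate/designate.conf",
      "config_files": [
        {
          "source": "/var/lib/kolla/config_files/designate.conf",
          "dest": "/etc/designate/designate.conf",
          "owner": "designate",
          "perm": "0600"
        }
      ]
    }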
2025-05-19 14:55:50.511425 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-05-19 14:55:50.511437 | orchestrator | Monday 19 May 2025 14:53:46 +0000 (0:00:06.918) 0:00:48.999 ************
2025-05-19 14:55:50.511453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 14:55:50.511464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 14:55:50.511475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 14:55:50.511493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 14:55:50.511510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 14:55:50.511522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 14:55:50.511537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.511549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.511561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'],
'timeout': '30'}}}) 2025-05-19 14:55:50.511578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.511649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.511665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.511682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 14:55:50.511692 | orchestrator |
2025-05-19 14:55:50.511702 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-05-19 14:55:50.511711 | orchestrator | Monday 19 May 2025 14:54:01 +0000 (0:00:14.837) 0:01:03.837 ************
2025-05-19 14:55:50.511721 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-19 14:55:50.511730 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-19 14:55:50.511740 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-19 14:55:50.511749 | orchestrator |
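pools.yaml is what wires designate to the bind9 backend deployed above. Rendered from pools.yaml.j2, it follows designate's pool schema roughly as below; the 192.168.16.x address is taken from the healthchecks earlier in this log, and everything else (hostnames, ports, key path) is assumed for illustration:

    - name: default
      description: Default BIND9 pool
      ns_records:
        - hostname: ns1.testbed.osism.xyz.   # assumed
          priority: 1
      nameservers:
        - host: 192.168.16.10
          port: 53
      targets:
        - type: bind9
          masters:
            - host: 192.168.16.10
              port: 5354                     # designate-mdns; port assumed
          options:
            host: 192.168.16.10
            port: 53
            rndc_host: 192.168.16.10
            rndc_port: 953
            rndc_key_file: /etc/designate/rndc.key

The named.conf and rndc.conf tasks that follow render the matching BIND side; in the rndc.conf results below, the designate-backend-bind9 items report changed while the api items skip.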
rndc.conf] ************************************** 2025-05-19 14:55:50.511824 | orchestrator | Monday 19 May 2025 14:54:07 +0000 (0:00:02.135) 0:01:09.565 ************ 2025-05-19 14:55:50.511837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.511848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.511869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.511880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 
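[editor's note, not part of the job output] The loop items dumped in these tasks are kolla-ansible's per-service container definitions; the 'healthcheck' dict in each item is what becomes the Docker health check of that container. Below is a minimal illustrative sketch, in Python, of how such a dict could map onto `docker run` health flags. The function name and the assumption that the bare numbers are seconds are mine; the real translation happens inside kolla-ansible's kolla_docker module, not in code like this.

    def healthcheck_to_docker_flags(hc):
        # 'test' is a kolla-style healthcheck list as seen in the log items,
        # e.g. ['CMD-SHELL', 'healthcheck_listen named 53'].
        cmd = hc["test"][1] if hc["test"][0] == "CMD-SHELL" else " ".join(hc["test"])
        return [
            "--health-cmd=" + cmd,
            "--health-interval={}s".format(hc["interval"]),       # assumed seconds
            "--health-retries={}".format(hc["retries"]),
            "--health-start-period={}s".format(hc["start_period"]),
            "--health-timeout={}s".format(hc["timeout"]),
        ]

    hc = {"interval": "30", "retries": "3", "start_period": "5",
          "test": ["CMD-SHELL", "healthcheck_listen named 53"], "timeout": "30"}
    print(" ".join(healthcheck_to_docker_flags(hc)))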
2025-05-19 14:55:50.511890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.511915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.511925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.511941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.511956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.511967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.511976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.511990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512046 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512082 | orchestrator | 2025-05-19 14:55:50.512092 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-19 14:55:50.512101 | orchestrator | Monday 19 May 2025 14:54:09 +0000 (0:00:02.783) 0:01:12.348 ************ 2025-05-19 14:55:50.512115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.512126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.512142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.512157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512418 | orchestrator | 2025-05-19 14:55:50.512427 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-19 14:55:50.512437 | orchestrator | Monday 19 May 2025 14:54:12 +0000 (0:00:02.834) 0:01:15.183 ************ 2025-05-19 14:55:50.512447 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:50.512456 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:50.512466 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:50.512475 | orchestrator | 2025-05-19 14:55:50.512485 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-19 14:55:50.512494 | orchestrator | Monday 19 May 2025 14:54:13 +0000 (0:00:01.134) 0:01:16.317 ************ 2025-05-19 14:55:50.512504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.512525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:55:50.512535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512582 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:50.512592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.512612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:55:50.512622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512668 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:50.512678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 14:55:50.512697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 14:55:50.512707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 14:55:50.512752 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:50.512762 | orchestrator | 2025-05-19 14:55:50.512771 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-19 14:55:50.512781 | orchestrator | Monday 19 May 2025 14:54:15 +0000 (0:00:01.802) 0:01:18.120 ************ 2025-05-19 14:55:50.512791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 14:55:50.512810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 14:55:50.512821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 14:55:50.512831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512856 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.512994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.513008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 14:55:50.513018 | orchestrator | 2025-05-19 14:55:50.513057 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-19 14:55:50.513068 | orchestrator | Monday 19 May 2025 14:54:20 +0000 (0:00:04.517) 0:01:22.637 ************ 2025-05-19 14:55:50.513079 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:55:50.513090 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:55:50.513100 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:55:50.513111 | orchestrator | 2025-05-19 14:55:50.513121 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-05-19 14:55:50.513137 | orchestrator | Monday 19 May 2025 14:54:20 +0000 (0:00:00.271) 0:01:22.909 ************ 2025-05-19 14:55:50.513148 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-05-19 14:55:50.513245 | orchestrator | 2025-05-19 14:55:50.513260 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-05-19 14:55:50.513272 | orchestrator | Monday 19 May 2025 14:54:23 +0000 (0:00:02.448) 0:01:25.358 ************ 2025-05-19 14:55:50.513283 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 14:55:50.513294 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-05-19 14:55:50.513305 | orchestrator | 2025-05-19 14:55:50.513315 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-05-19 14:55:50.513324 | orchestrator | Monday 19 May 2025 14:54:25 +0000 (0:00:02.119) 0:01:27.478 ************ 2025-05-19 14:55:50.513333 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:50.513342 | orchestrator | 2025-05-19 14:55:50.513352 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-19 14:55:50.513361 | orchestrator | Monday 19 May 2025 14:54:40 +0000 (0:00:15.029) 0:01:42.507 ************ 2025-05-19 14:55:50.513370 | orchestrator | 2025-05-19 14:55:50.513380 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-19 14:55:50.513389 | orchestrator | Monday 19 May 2025 14:54:40 +0000 (0:00:00.120) 0:01:42.627 ************ 2025-05-19 14:55:50.513398 | orchestrator | 2025-05-19 14:55:50.513408 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-19 14:55:50.513417 | orchestrator | Monday 19 May 2025 
14:54:40 +0000 (0:00:00.199) 0:01:42.827 ************ 2025-05-19 14:55:50.513426 | orchestrator | 2025-05-19 14:55:50.513436 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-05-19 14:55:50.513445 | orchestrator | Monday 19 May 2025 14:54:40 +0000 (0:00:00.180) 0:01:43.007 ************ 2025-05-19 14:55:50.513454 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:50.513463 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:55:50.513473 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:55:50.513482 | orchestrator | 2025-05-19 14:55:50.513491 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-05-19 14:55:50.513506 | orchestrator | Monday 19 May 2025 14:54:53 +0000 (0:00:13.045) 0:01:56.053 ************ 2025-05-19 14:55:50.513515 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:50.513525 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:55:50.513534 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:55:50.513543 | orchestrator | 2025-05-19 14:55:50.513552 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-05-19 14:55:50.513561 | orchestrator | Monday 19 May 2025 14:55:04 +0000 (0:00:10.659) 0:02:06.713 ************ 2025-05-19 14:55:50.513571 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:55:50.513580 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:50.513589 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:55:50.513598 | orchestrator | 2025-05-19 14:55:50.513608 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-05-19 14:55:50.513617 | orchestrator | Monday 19 May 2025 14:55:16 +0000 (0:00:11.764) 0:02:18.477 ************ 2025-05-19 14:55:50.513626 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:55:50.513635 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:50.513645 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:55:50.513654 | orchestrator | 2025-05-19 14:55:50.513663 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-05-19 14:55:50.513672 | orchestrator | Monday 19 May 2025 14:55:26 +0000 (0:00:10.591) 0:02:29.068 ************ 2025-05-19 14:55:50.513682 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:50.513691 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:55:50.513700 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:55:50.513710 | orchestrator | 2025-05-19 14:55:50.513719 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-19 14:55:50.513735 | orchestrator | Monday 19 May 2025 14:55:32 +0000 (0:00:05.672) 0:02:34.741 ************ 2025-05-19 14:55:50.513745 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:50.513754 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:55:50.513763 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:55:50.513772 | orchestrator | 2025-05-19 14:55:50.513781 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-19 14:55:50.513791 | orchestrator | Monday 19 May 2025 14:55:43 +0000 (0:00:10.642) 0:02:45.383 ************ 2025-05-19 14:55:50.513800 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:55:50.513809 | orchestrator | 2025-05-19 14:55:50.513818 | orchestrator | PLAY RECAP ********************************************************************* 
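[editor's note, not part of the job output] The PLAY RECAP lines that follow summarize per-node results; the run is healthy when failed=0 and unreachable=0 on every host, as is the case here. A small self-contained Python sketch for parsing such recap lines once the leading `timestamp | orchestrator |` prefix is stripped; the regex is mine, written against the format visible below.

    import re

    RECAP = re.compile(
        r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
        r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)\s+"
        r"skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+ignored=(?P<ignored>\d+)")

    line = ("testbed-node-0 : ok=29  changed=23  unreachable=0 "
            "failed=0 skipped=7  rescued=0 ignored=0")
    m = RECAP.match(line)
    counts = {k: int(v) for k, v in m.groupdict().items() if k != "host"}
    assert counts["failed"] == 0 and counts["unreachable"] == 0
    print(m.group("host"), counts)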
2025-05-19 14:55:50.513828 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-19 14:55:50.513838 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-19 14:55:50.513847 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-19 14:55:50.513856 | orchestrator |
2025-05-19 14:55:50.513866 | orchestrator |
2025-05-19 14:55:50.513881 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:55:50.513891 | orchestrator | Monday 19 May 2025 14:55:49 +0000 (0:00:06.564) 0:02:51.947 ************
2025-05-19 14:55:50.513901 | orchestrator | ===============================================================================
2025-05-19 14:55:50.513910 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.03s
2025-05-19 14:55:50.513920 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.84s
2025-05-19 14:55:50.513929 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.05s
2025-05-19 14:55:50.513938 | orchestrator | designate : Restart designate-central container ------------------------ 11.76s
2025-05-19 14:55:50.513948 | orchestrator | designate : Restart designate-api container ---------------------------- 10.66s
2025-05-19 14:55:50.513957 | orchestrator | designate : Restart designate-worker container ------------------------- 10.64s
2025-05-19 14:55:50.513966 | orchestrator | designate : Restart designate-producer container ----------------------- 10.59s
2025-05-19 14:55:50.513975 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.06s
2025-05-19 14:55:50.513985 | orchestrator | designate : Copying over config.json files for services ----------------- 6.92s
2025-05-19 14:55:50.513994 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.56s
2025-05-19 14:55:50.514003 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.00s
2025-05-19 14:55:50.514012 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.67s
2025-05-19 14:55:50.514093 | orchestrator | designate : Check designate containers ---------------------------------- 4.52s
2025-05-19 14:55:50.514104 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.92s
2025-05-19 14:55:50.514113 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.79s
2025-05-19 14:55:50.514123 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.59s
2025-05-19 14:55:50.514132 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.31s
2025-05-19 14:55:50.514141 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.24s
2025-05-19 14:55:50.514151 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.24s
2025-05-19 14:55:50.514160 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.00s
2025-05-19 14:55:50.514169 | orchestrator | 2025-05-19 14:55:50 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:55:50.514185 | orchestrator | 2025-05-19 14:55:50 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:55:50.514169 | orchestrator | 2025-05-19 14:55:50 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:55:50.514185 | orchestrator | 2025-05-19 14:55:50 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:55:50.516212 | orchestrator | 2025-05-19 14:55:50 | INFO  | Task 00acbb28-b103-4425-9b74-2da7b5f729d4 is in state STARTED
2025-05-19 14:55:50.516231 | orchestrator | 2025-05-19 14:55:50 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:55:53.580194 | orchestrator | 2025-05-19 14:55:53 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:55:53.583531 | orchestrator | 2025-05-19 14:55:53 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:55:53.586314 | orchestrator | 2025-05-19 14:55:53 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:55:53.589483 | orchestrator | 2025-05-19 14:55:53 | INFO  | Task 00acbb28-b103-4425-9b74-2da7b5f729d4 is in state STARTED
2025-05-19 14:55:53.589557 | orchestrator | 2025-05-19 14:55:53 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:55:56.630762 | orchestrator | 2025-05-19 14:55:56 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:55:56.634221 | orchestrator | 2025-05-19 14:55:56 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:55:56.634279 | orchestrator | 2025-05-19 14:55:56 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:55:56.635495 | orchestrator | 2025-05-19 14:55:56 | INFO  | Task 16c7b565-e3ab-4406-834e-d1dda496c825 is in state STARTED
2025-05-19 14:55:56.639681 | orchestrator |
2025-05-19 14:55:56.639749 | orchestrator |
2025-05-19 14:55:56.639772 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 14:55:56.639795 | orchestrator |
2025-05-19 14:55:56.639814 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 14:55:56.639833 | orchestrator | Monday 19 May 2025 14:54:47 +0000 (0:00:00.198) 0:00:00.198 ************
2025-05-19 14:55:56.639851 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:55:56.639871 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:55:56.639888 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:55:56.639907 | orchestrator |
2025-05-19 14:55:56.639949 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 14:55:56.639967 | orchestrator | Monday 19 May 2025 14:54:47 +0000 (0:00:00.230) 0:00:00.429 ************
2025-05-19 14:55:56.639989 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-05-19 14:55:56.640007 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-05-19 14:55:56.640025 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-05-19 14:55:56.640072 | orchestrator |
2025-05-19 14:55:56.640090 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-05-19 14:55:56.640108 | orchestrator |
2025-05-19 14:55:56.640124 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-05-19 14:55:56.640141 | orchestrator | Monday 19 May 2025 14:54:47 +0000 (0:00:00.330) 0:00:00.760 ************
2025-05-19 14:55:56.640159 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:55:56.640179 | orchestrator |
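The two "Group hosts based on ..." tasks that open each play are plain ansible.builtin.group_by calls; the group names in the output (for example enable_placement_True) are the rendered keys. A minimal sketch, with variable names assumed rather than copied from the playbooks:

- name: Group hosts based on configuration (sketch)
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on Kolla action
      ansible.builtin.group_by:
        key: "kolla_action_{{ kolla_action | default('deploy') }}"

    # Produces groups such as enable_placement_True, which the following
    # play uses to select the hosts the role applies to.
    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_placement_{{ enable_placement | default(false) }}"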
2025-05-19 14:55:56.640196 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-05-19 14:55:56.640213 | orchestrator | Monday 19 May 2025 14:54:48 +0000 (0:00:00.405) 0:00:01.166 ************
2025-05-19 14:55:56.640232 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-05-19 14:55:56.640253 | orchestrator |
2025-05-19 14:55:56.640275 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-05-19 14:55:56.640295 | orchestrator | Monday 19 May 2025 14:54:51 +0000 (0:00:03.248) 0:00:04.414 ************
2025-05-19 14:55:56.640316 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-05-19 14:55:56.640370 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-05-19 14:55:56.640392 | orchestrator |
2025-05-19 14:55:56.640413 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-05-19 14:55:56.640433 | orchestrator | Monday 19 May 2025 14:54:57 +0000 (0:00:06.222) 0:00:10.636 ************
2025-05-19 14:55:56.640454 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-19 14:55:56.640475 | orchestrator |
2025-05-19 14:55:56.640495 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-05-19 14:55:56.640515 | orchestrator | Monday 19 May 2025 14:55:01 +0000 (0:00:03.181) 0:00:13.817 ************
2025-05-19 14:55:56.640536 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-19 14:55:56.640554 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-05-19 14:55:56.640576 | orchestrator |
2025-05-19 14:55:56.640594 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-05-19 14:55:56.640613 | orchestrator | Monday 19 May 2025 14:55:04 +0000 (0:00:03.883) 0:00:17.701 ************
2025-05-19 14:55:56.640632 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-19 14:55:56.640649 | orchestrator |
2025-05-19 14:55:56.640668 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-05-19 14:55:56.640687 | orchestrator | Monday 19 May 2025 14:55:08 +0000 (0:00:03.403) 0:00:21.104 ************
2025-05-19 14:55:56.640704 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-05-19 14:55:56.640723 | orchestrator |
2025-05-19 14:55:56.640741 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-05-19 14:55:56.640760 | orchestrator | Monday 19 May 2025 14:55:12 +0000 (0:00:03.941) 0:00:25.046 ************
2025-05-19 14:55:56.640778 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:56.640814 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:56.640833 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:56.640851 | orchestrator |
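The service-ks-register block registers the service, its endpoints, project, user, and role assignment in Keystone; the URLs are the ones visible above. A sketch using the openstack.cloud collection (the actual role uses kolla's own wrappers, so module names and arguments here are illustrative). The "[WARNING]: Module did not set no_log for update_password" line is the user task logging a password update; a no_log: true on that task avoids it:

- name: Register placement in Keystone (sketch)
  hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Creating services
      openstack.cloud.catalog_service:
        name: placement
        type: placement
        state: present

    - name: Creating endpoints
      openstack.cloud.endpoint:
        service: placement
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        state: present
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:8780" }
        - { interface: public, url: "https://api.testbed.osism.xyz:8780" }

    - name: Creating users
      openstack.cloud.identity_user:
        name: placement
        password: "{{ placement_keystone_password }}"  # assumed variable name
        state: present
      no_log: true  # prevents the update_password warning seen in this log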
2025-05-19 14:55:56.640869 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-05-19 14:55:56.640888 | orchestrator | Monday 19 May 2025 14:55:12 +0000 (0:00:00.207) 0:00:25.254 ************
2025-05-19 14:55:56.640913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.640960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.640996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641016 | orchestrator |
2025-05-19 14:55:56.641125 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-05-19 14:55:56.641147 | orchestrator | Monday 19 May 2025 14:55:13 +0000 (0:00:00.829) 0:00:26.083 ************
2025-05-19 14:55:56.641166 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:56.641185 | orchestrator |
2025-05-19 14:55:56.641203 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-05-19 14:55:56.641222 | orchestrator | Monday 19 May 2025 14:55:13 +0000 (0:00:00.113) 0:00:26.197 ************
2025-05-19 14:55:56.641241 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:56.641258 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:56.641277 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:56.641294 | orchestrator |
2025-05-19 14:55:56.641311 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-05-19 14:55:56.641329 | orchestrator | Monday 19 May 2025 14:55:13 +0000 (0:00:00.369) 0:00:26.567 ************
2025-05-19 14:55:56.641347 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 14:55:56.641365 | orchestrator |
2025-05-19 14:55:56.641383 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-05-19 14:55:56.641399 | orchestrator | Monday 19 May 2025 14:55:14 +0000 (0:00:00.437) 0:00:27.004 ************
2025-05-19 14:55:56.641427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641515 | orchestrator |
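Each item dict above is one service definition; its healthcheck block becomes the Docker healthcheck of the placement_api container. The same check expressed directly for community.docker.docker_container, as a sketch (healthcheck_curl is a helper script shipped in Kolla images; the address is node-0's internal API IP from the log):

- name: Start placement_api with the logged healthcheck (sketch)
  community.docker.docker_container:
    name: placement_api
    image: registry.osism.tech/kolla/placement-api:2024.2
    state: started
    healthcheck:
      # CMD-SHELL runs the probe through a shell inside the container
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s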
2025-05-19 14:55:56.641532 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-05-19 14:55:56.641549 | orchestrator | Monday 19 May 2025 14:55:15 +0000 (0:00:01.379) 0:00:28.384 ************
2025-05-19 14:55:56.641566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641584 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:56.641607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641626 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:56.641652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641678 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:56.641694 | orchestrator |
2025-05-19 14:55:56.641711 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-05-19 14:55:56.641727 | orchestrator | Monday 19 May 2025 14:55:16 +0000 (0:00:00.638) 0:00:29.022 ************
2025-05-19 14:55:56.641742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641759 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:56.641777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641794 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:56.641817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641837 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:56.641853 | orchestrator |
2025-05-19 14:55:56.641868 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-05-19 14:55:56.641885 | orchestrator | Monday 19 May 2025 14:55:17 +0000 (0:00:00.797) 0:00:29.820 ************
2025-05-19 14:55:56.641910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.641976 | orchestrator |
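The various "Copying over ..." tasks all follow the same pattern: template a file into the service's /etc/kolla/ directory and notify the container restart handler, so restarts only happen when content actually changed (which is why every config task here reports "changed" and the handlers fire later). A minimal sketch with assumed file paths:

- name: Copying over placement.conf (sketch)
  ansible.builtin.template:
    src: placement.conf.j2
    dest: /etc/kolla/placement-api/placement.conf
    mode: "0660"
  notify:
    # Only queued if the rendered file differs from what is on disk
    - Restart placement-api container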
2025-05-19 14:55:56.641993 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-05-19 14:55:56.642009 | orchestrator | Monday 19 May 2025 14:55:18 +0000 (0:00:01.452) 0:00:31.272 ************
2025-05-19 14:55:56.642119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.642141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.642182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.642199 | orchestrator |
2025-05-19 14:55:56.642216 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-05-19 14:55:56.642233 | orchestrator | Monday 19 May 2025 14:55:21 +0000 (0:00:03.014) 0:00:34.287 ************
2025-05-19 14:55:56.642250 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-05-19 14:55:56.642270 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-05-19 14:55:56.642287 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-05-19 14:55:56.642303 | orchestrator |
2025-05-19 14:55:56.642321 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-05-19 14:55:56.642337 | orchestrator | Monday 19 May 2025 14:55:23 +0000 (0:00:01.596) 0:00:35.883 ************
2025-05-19 14:55:56.642355 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:55:56.642372 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:55:56.642388 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:55:56.642403 | orchestrator |
2025-05-19 14:55:56.642420 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-05-19 14:55:56.642438 | orchestrator | Monday 19 May 2025 14:55:24 +0000 (0:00:01.293) 0:00:37.177 ************
2025-05-19 14:55:56.642457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.642475 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:55:56.642502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.642532 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:55:56.642562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.642582 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:55:56.642601 | orchestrator |
2025-05-19 14:55:56.642618 | orchestrator | TASK [placement : Check placement containers] **********************************
2025-05-19 14:55:56.642635 | orchestrator | Monday 19 May 2025 14:55:24 +0000 (0:00:00.501) 0:00:37.679 ************
2025-05-19 14:55:56.642653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.642671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.642697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-19 14:55:56.642733 | orchestrator |
2025-05-19 14:55:56.642750 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-05-19 14:55:56.642767 | orchestrator | Monday 19 May 2025 14:55:26 +0000 (0:00:01.422) 0:00:39.102 ************
2025-05-19 14:55:56.642785 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:55:56.642801 | orchestrator |
2025-05-19 14:55:56.642817 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-05-19 14:55:56.642835 | orchestrator | Monday 19 May 2025 14:55:28 +0000 (0:00:02.227) 0:00:41.329 ************
2025-05-19 14:55:56.642854 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:55:56.642870 | orchestrator |
2025-05-19 14:55:56.642888 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-05-19 14:55:56.642905 | orchestrator | Monday 19 May 2025 14:55:30 +0000 (0:00:02.179) 0:00:43.508 ************
2025-05-19 14:55:56.642923 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:55:56.642941 | orchestrator |
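"Running placement bootstrap container" launches a one-shot container that runs the schema migration against the database created two tasks earlier. Reduced to its core, and ignoring that kolla actually starts a dedicated bootstrap container rather than exec-ing into a running one, it amounts to this sketch:

- name: Running placement bootstrap (sketch)
  hosts: testbed-node-0
  gather_facts: false
  tasks:
    # placement-manage db sync creates or upgrades the placement schema;
    # running it via the service image keeps client and schema in step.
    - name: Sync the placement database schema
      ansible.builtin.command: docker exec placement_api placement-manage db sync
      run_once: true
      changed_when: true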
2025-05-19 14:55:56.642958 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-19 14:55:56.642976 | orchestrator | Monday 19 May 2025 14:55:43 +0000 (0:00:13.064) 0:00:56.573 ************
2025-05-19 14:55:56.642994 | orchestrator |
2025-05-19 14:55:56.643010 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-19 14:55:56.643050 | orchestrator | Monday 19 May 2025 14:55:43 +0000 (0:00:00.063) 0:00:56.637 ************
2025-05-19 14:55:56.643068 | orchestrator |
2025-05-19 14:55:56.643095 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-19 14:55:56.643111 | orchestrator | Monday 19 May 2025 14:55:43 +0000 (0:00:00.058) 0:00:56.695 ************
2025-05-19 14:55:56.643127 | orchestrator |
2025-05-19 14:55:56.643142 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-05-19 14:55:56.643157 | orchestrator | Monday 19 May 2025 14:55:43 +0000 (0:00:00.073) 0:00:56.769 ************
2025-05-19 14:55:56.643173 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:55:56.643189 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:55:56.643206 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:55:56.643223 | orchestrator |
2025-05-19 14:55:56.643239 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:55:56.643256 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-19 14:55:56.643275 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 14:55:56.643292 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 14:55:56.643308 | orchestrator |
2025-05-19 14:55:56.643323 | orchestrator |
2025-05-19 14:55:56.643341 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:55:56.643359 | orchestrator | Monday 19 May 2025 14:55:53 +0000 (0:00:09.708) 0:01:06.478 ************
2025-05-19 14:55:56.643375 | orchestrator | ===============================================================================
2025-05-19 14:55:56.643391 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.06s
2025-05-19 14:55:56.643407 | orchestrator | placement : Restart placement-api container ----------------------------- 9.71s
2025-05-19 14:55:56.643423 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.22s
2025-05-19 14:55:56.643452 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.94s
2025-05-19 14:55:56.643469 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.88s
2025-05-19 14:55:56.643486 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.40s
2025-05-19 14:55:56.643502 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.25s
2025-05-19 14:55:56.643519 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.18s
2025-05-19 14:55:56.643533 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.01s
2025-05-19 14:55:56.643547 | orchestrator | placement : Creating placement databases -------------------------------- 2.23s
2025-05-19 14:55:56.643560 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.17s
2025-05-19 14:55:56.643572 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.60s
2025-05-19 14:55:56.643583 | orchestrator | placement : Copying over config.json files for services ----------------- 1.45s
2025-05-19 14:55:56.643597 | orchestrator | placement : Check placement containers ---------------------------------- 1.42s
2025-05-19 14:55:56.643610 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.38s
2025-05-19 14:55:56.643622 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.29s
2025-05-19 14:55:56.643635 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.83s
2025-05-19 14:55:56.643647 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.80s
2025-05-19 14:55:56.643661 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.64s
2025-05-19 14:55:56.643673 | orchestrator | placement : Copying over existing policy file --------------------------- 0.50s
2025-05-19 14:55:56.643694 | orchestrator | 2025-05-19 14:55:56 | INFO  | Task 00acbb28-b103-4425-9b74-2da7b5f729d4 is in state SUCCESS
2025-05-19 14:55:56.643707 | orchestrator | 2025-05-19 14:55:56 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:55:59.710316 | orchestrator | 2025-05-19 14:55:59 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:55:59.710424 | orchestrator | 2025-05-19 14:55:59 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:55:59.710691 | orchestrator | 2025-05-19 14:55:59 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:55:59.711752 | orchestrator | 2025-05-19 14:55:59 | INFO  | Task 16c7b565-e3ab-4406-834e-d1dda496c825 is in state STARTED
2025-05-19 14:55:59.711779 | orchestrator | 2025-05-19 14:55:59 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:02.758886 | orchestrator | 2025-05-19 14:56:02 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:02.760627 | orchestrator | 2025-05-19 14:56:02 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:02.764561 | orchestrator | 2025-05-19 14:56:02 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:02.766176 | orchestrator | 2025-05-19 14:56:02 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:02.767823 | orchestrator | 2025-05-19 14:56:02 | INFO  | Task 16c7b565-e3ab-4406-834e-d1dda496c825 is in state SUCCESS
2025-05-19 14:56:02.768168 | orchestrator | 2025-05-19 14:56:02 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:05.815489 | orchestrator | 2025-05-19 14:56:05 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:05.817576 | orchestrator | 2025-05-19 14:56:05 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:05.819416 | orchestrator | 2025-05-19 14:56:05 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:05.821695 | orchestrator | 2025-05-19 14:56:05 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:05.821883 | orchestrator | 2025-05-19 14:56:05 | INFO  | Wait 1 second(s) until the next check
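The three "Flush handlers" tasks and the restart handler above are standard Ansible handler mechanics: meta: flush_handlers forces any notified handlers to run at that point instead of at the end of the play, so the containers restart before the role moves on. A sketch of the pairing, with container options abbreviated:

- name: Apply role placement (handler sketch)
  hosts: testbed-node-0
  tasks:
    - name: Force notified handlers to run now
      ansible.builtin.meta: flush_handlers
  handlers:
    - name: Restart placement-api container
      community.docker.docker_container:
        name: placement_api
        image: registry.osism.tech/kolla/placement-api:2024.2
        state: started
        restart: true  # force a stop/start even if the container is running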
2025-05-19 14:56:08.871376 | orchestrator | 2025-05-19 14:56:08 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:08.873137 | orchestrator | 2025-05-19 14:56:08 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:08.875238 | orchestrator | 2025-05-19 14:56:08 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:08.877796 | orchestrator | 2025-05-19 14:56:08 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:08.877825 | orchestrator | 2025-05-19 14:56:08 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:11.915630 | orchestrator | 2025-05-19 14:56:11 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:11.918253 | orchestrator | 2025-05-19 14:56:11 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:11.920863 | orchestrator | 2025-05-19 14:56:11 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:11.922264 | orchestrator | 2025-05-19 14:56:11 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:11.922302 | orchestrator | 2025-05-19 14:56:11 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:14.968686 | orchestrator | 2025-05-19 14:56:14 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:14.969298 | orchestrator | 2025-05-19 14:56:14 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:14.971263 | orchestrator | 2025-05-19 14:56:14 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:14.971304 | orchestrator | 2025-05-19 14:56:14 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:14.971325 | orchestrator | 2025-05-19 14:56:14 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:18.041642 | orchestrator | 2025-05-19 14:56:18 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:18.043166 | orchestrator | 2025-05-19 14:56:18 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:18.045134 | orchestrator | 2025-05-19 14:56:18 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:18.050568 | orchestrator | 2025-05-19 14:56:18 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:18.050608 | orchestrator | 2025-05-19 14:56:18 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:21.090889 | orchestrator | 2025-05-19 14:56:21 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:21.092762 | orchestrator | 2025-05-19 14:56:21 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:21.094938 | orchestrator | 2025-05-19 14:56:21 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:21.096953 | orchestrator | 2025-05-19 14:56:21 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:21.096987 | orchestrator | 2025-05-19 14:56:21 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:24.142344 | orchestrator | 2025-05-19 14:56:24 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:24.143498 | orchestrator | 2025-05-19 14:56:24 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:24.143532 | orchestrator | 2025-05-19 14:56:24 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:24.144483 | orchestrator | 2025-05-19 14:56:24 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:24.144506 | orchestrator | 2025-05-19 14:56:24 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:27.192697 | orchestrator | 2025-05-19 14:56:27 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:27.193078 | orchestrator | 2025-05-19 14:56:27 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:27.194830 | orchestrator | 2025-05-19 14:56:27 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:27.196223 | orchestrator | 2025-05-19 14:56:27 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:27.196255 | orchestrator | 2025-05-19 14:56:27 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:30.246007 | orchestrator | 2025-05-19 14:56:30 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:30.246196 | orchestrator | 2025-05-19 14:56:30 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:30.246211 | orchestrator | 2025-05-19 14:56:30 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:30.246223 | orchestrator | 2025-05-19 14:56:30 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:30.246234 | orchestrator | 2025-05-19 14:56:30 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:33.285469 | orchestrator | 2025-05-19 14:56:33 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:33.287452 | orchestrator | 2025-05-19 14:56:33 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:33.289881 | orchestrator | 2025-05-19 14:56:33 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:33.291984 | orchestrator | 2025-05-19 14:56:33 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:33.292011 | orchestrator | 2025-05-19 14:56:33 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:36.340794 | orchestrator | 2025-05-19 14:56:36 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:36.341181 | orchestrator | 2025-05-19 14:56:36 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:36.341971 | orchestrator | 2025-05-19 14:56:36 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:36.342652 | orchestrator | 2025-05-19 14:56:36 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:36.343426 | orchestrator | 2025-05-19 14:56:36 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:39.386471 | orchestrator | 2025-05-19 14:56:39 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:39.387806 | orchestrator | 2025-05-19 14:56:39 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:39.389692 | orchestrator | 2025-05-19 14:56:39 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:39.391529 | orchestrator | 2025-05-19 14:56:39 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:39.391596 | orchestrator | 2025-05-19 14:56:39 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:42.436921 | orchestrator | 2025-05-19 14:56:42 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:42.439179 | orchestrator | 2025-05-19 14:56:42 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:42.441411 | orchestrator | 2025-05-19 14:56:42 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:42.446985 | orchestrator | 2025-05-19 14:56:42 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:42.447523 | orchestrator | 2025-05-19 14:56:42 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:45.498181 | orchestrator | 2025-05-19 14:56:45 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:45.500444 | orchestrator | 2025-05-19 14:56:45 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:45.503242 | orchestrator | 2025-05-19 14:56:45 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:45.504314 | orchestrator | 2025-05-19 14:56:45 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:45.504567 | orchestrator | 2025-05-19 14:56:45 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:48.557846 | orchestrator | 2025-05-19 14:56:48 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:48.559710 | orchestrator | 2025-05-19 14:56:48 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:48.561290 | orchestrator | 2025-05-19 14:56:48 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:48.563144 | orchestrator | 2025-05-19 14:56:48 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:48.563171 | orchestrator | 2025-05-19 14:56:48 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:51.627387 | orchestrator | 2025-05-19 14:56:51 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:51.628279 | orchestrator | 2025-05-19 14:56:51 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:51.628315 | orchestrator | 2025-05-19 14:56:51 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:51.629094 | orchestrator | 2025-05-19 14:56:51 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:51.629120 | orchestrator | 2025-05-19 14:56:51 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:54.680636 | orchestrator | 2025-05-19 14:56:54 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:54.684853 | orchestrator | 2025-05-19 14:56:54 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:54.686807 | orchestrator | 2025-05-19 14:56:54 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:54.688492 | orchestrator | 2025-05-19 14:56:54 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:54.688539 | orchestrator | 2025-05-19 14:56:54 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:56:57.739348 | orchestrator | 2025-05-19 14:56:57 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:56:57.739461 | orchestrator | 2025-05-19 14:56:57 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:56:57.740366 | orchestrator | 2025-05-19 14:56:57 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:56:57.740419 | orchestrator | 2025-05-19 14:56:57 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:56:57.740432 | orchestrator | 2025-05-19 14:56:57 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:57:00.776919 | orchestrator | 2025-05-19 14:57:00 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:57:00.780580 | orchestrator | 2025-05-19 14:57:00 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:57:00.780626 | orchestrator | 2025-05-19 14:57:00 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:57:00.781182 | orchestrator | 2025-05-19 14:57:00 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:57:00.781389 | orchestrator | 2025-05-19 14:57:00 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:57:03.819836 | orchestrator | 2025-05-19 14:57:03 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:57:03.820761 | orchestrator | 2025-05-19 14:57:03 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:57:03.823985 | orchestrator | 2025-05-19 14:57:03 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:57:03.826829 | orchestrator | 2025-05-19 14:57:03 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:57:03.826873 | orchestrator | 2025-05-19 14:57:03 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:57:06.871680 | orchestrator | 2025-05-19 14:57:06 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:57:06.873980 | orchestrator | 2025-05-19 14:57:06 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:57:06.876231 | orchestrator | 2025-05-19 14:57:06 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:57:06.878156 | orchestrator | 2025-05-19 14:57:06 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:57:06.878183 | orchestrator | 2025-05-19 14:57:06 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:57:09.922952 | orchestrator | 2025-05-19 14:57:09 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:57:09.924321 | orchestrator | 2025-05-19 14:57:09 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:57:09.925962 | orchestrator | 2025-05-19 14:57:09 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:57:09.927499 | orchestrator | 2025-05-19 14:57:09 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:57:09.927527 | orchestrator | 2025-05-19 14:57:09 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:57:12.982139 | orchestrator | 2025-05-19 14:57:12 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:57:12.983683 | orchestrator | 2025-05-19 14:57:12 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:57:12.983715 | orchestrator | 2025-05-19 14:57:12 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:57:12.984871 | orchestrator | 2025-05-19 14:57:12 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:57:12.984892 | orchestrator | 2025-05-19 14:57:12 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:57:16.041155 | orchestrator | 2025-05-19 14:57:16 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:57:16.042525 | orchestrator | 2025-05-19 14:57:16 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:57:16.044049 | orchestrator | 2025-05-19 14:57:16 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:57:16.045876 | orchestrator | 2025-05-19 14:57:16 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:57:16.045913 | orchestrator | 2025-05-19 14:57:16 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:57:19.093824 | orchestrator | 2025-05-19 14:57:19 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:57:19.094179 | orchestrator | 2025-05-19 14:57:19 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:57:19.095299 | orchestrator | 2025-05-19 14:57:19 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:57:19.098397 | orchestrator | 2025-05-19 14:57:19 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:57:19.098427 | orchestrator | 2025-05-19 14:57:19 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:57:22.138344 | orchestrator | 2025-05-19 14:57:22 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:57:22.138849 | orchestrator | 2025-05-19 14:57:22 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:57:22.139749 | orchestrator | 2025-05-19 14:57:22 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:57:22.141687 | orchestrator | 2025-05-19 14:57:22 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:57:22.141864 | orchestrator | 2025-05-19 14:57:22 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:57:25.169740 | orchestrator | 2025-05-19 14:57:25 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:57:25.169812 | orchestrator | 2025-05-19 14:57:25 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:57:25.169983 | orchestrator | 2025-05-19 14:57:25 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:57:25.171964 | orchestrator | 2025-05-19 14:57:25 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state STARTED
2025-05-19 14:57:25.171998 | orchestrator | 2025-05-19 14:57:25 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:57:28.205252 | orchestrator | 2025-05-19 14:57:28 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED
2025-05-19 14:57:28.205454 | orchestrator | 2025-05-19 14:57:28 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:57:28.206171 | orchestrator | 2025-05-19 14:57:28 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:57:28.207289 | orchestrator | 2025-05-19 14:57:28 | INFO  | Task 459de843-74c1-4ceb-8157-c2b3d0765b2b is in state SUCCESS
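The "Task <uuid> is in state STARTED/SUCCESS" stream is the OSISM deploy wrapper polling its task queue; although it prints "Wait 1 second(s)", consecutive checks land roughly three seconds apart because each cycle also queries every outstanding task. The same wait-until-done pattern expressed in plain Ansible, with an illustrative status command (not a documented osism CLI call):

- name: Wait for a deployment task to finish (sketch)
  ansible.builtin.command: osism get task 459de843-74c1-4ceb-8157-c2b3d0765b2b  # illustrative command
  register: task_state
  until: "'SUCCESS' in task_state.stdout or 'FAILURE' in task_state.stdout"
  retries: 600   # give up after ~30 minutes
  delay: 3       # matches the observed spacing between checks
  changed_when: false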
based on configuration] ************************************** 2025-05-19 14:57:28.208744 | orchestrator | 2025-05-19 14:57:28.208755 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:57:28.208766 | orchestrator | Monday 19 May 2025 14:55:58 +0000 (0:00:00.224) 0:00:00.224 ************ 2025-05-19 14:57:28.208777 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:57:28.208789 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:57:28.208800 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:57:28.208810 | orchestrator | 2025-05-19 14:57:28.208821 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:57:28.208853 | orchestrator | Monday 19 May 2025 14:55:58 +0000 (0:00:00.647) 0:00:00.871 ************ 2025-05-19 14:57:28.208865 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-19 14:57:28.208876 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-19 14:57:28.208887 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-05-19 14:57:28.208898 | orchestrator | 2025-05-19 14:57:28.208908 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-19 14:57:28.208920 | orchestrator | 2025-05-19 14:57:28.208932 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-19 14:57:28.208942 | orchestrator | Monday 19 May 2025 14:55:59 +0000 (0:00:00.835) 0:00:01.707 ************ 2025-05-19 14:57:28.208953 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:57:28.208964 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:57:28.208974 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:57:28.208985 | orchestrator | 2025-05-19 14:57:28.208996 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:57:28.209034 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:57:28.209048 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:57:28.209059 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 14:57:28.209070 | orchestrator | 2025-05-19 14:57:28.209081 | orchestrator | 2025-05-19 14:57:28.209092 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:57:28.209103 | orchestrator | Monday 19 May 2025 14:56:00 +0000 (0:00:00.749) 0:00:02.456 ************ 2025-05-19 14:57:28.209113 | orchestrator | =============================================================================== 2025-05-19 14:57:28.209124 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2025-05-19 14:57:28.209135 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.75s 2025-05-19 14:57:28.209145 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s 2025-05-19 14:57:28.209156 | orchestrator | 2025-05-19 14:57:28.209167 | orchestrator | 2025-05-19 14:57:28.209178 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 14:57:28.209190 | orchestrator | 2025-05-19 14:57:28.209203 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:57:28.209215 | 
orchestrator | Monday 19 May 2025 14:55:33 +0000 (0:00:00.270) 0:00:00.270 ************ 2025-05-19 14:57:28.209227 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:57:28.209239 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:57:28.209251 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:57:28.209264 | orchestrator | 2025-05-19 14:57:28.209276 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:57:28.209288 | orchestrator | Monday 19 May 2025 14:55:33 +0000 (0:00:00.322) 0:00:00.592 ************ 2025-05-19 14:57:28.209300 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-05-19 14:57:28.209313 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-05-19 14:57:28.209325 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-05-19 14:57:28.209338 | orchestrator | 2025-05-19 14:57:28.209349 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-05-19 14:57:28.209361 | orchestrator | 2025-05-19 14:57:28.209373 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-19 14:57:28.209386 | orchestrator | Monday 19 May 2025 14:55:34 +0000 (0:00:00.326) 0:00:00.919 ************ 2025-05-19 14:57:28.209411 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:57:28.209423 | orchestrator | 2025-05-19 14:57:28.209436 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-05-19 14:57:28.209456 | orchestrator | Monday 19 May 2025 14:55:34 +0000 (0:00:00.416) 0:00:01.335 ************ 2025-05-19 14:57:28.209469 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-05-19 14:57:28.209481 | orchestrator | 2025-05-19 14:57:28.209493 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-05-19 14:57:28.209505 | orchestrator | Monday 19 May 2025 14:55:38 +0000 (0:00:03.638) 0:00:04.973 ************ 2025-05-19 14:57:28.209517 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-05-19 14:57:28.209531 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-05-19 14:57:28.209543 | orchestrator | 2025-05-19 14:57:28.209554 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-05-19 14:57:28.209564 | orchestrator | Monday 19 May 2025 14:55:44 +0000 (0:00:06.471) 0:00:11.445 ************ 2025-05-19 14:57:28.209576 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 14:57:28.209586 | orchestrator | 2025-05-19 14:57:28.209597 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-05-19 14:57:28.209608 | orchestrator | Monday 19 May 2025 14:55:47 +0000 (0:00:02.688) 0:00:14.133 ************ 2025-05-19 14:57:28.209632 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 14:57:28.209644 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-05-19 14:57:28.209655 | orchestrator | 2025-05-19 14:57:28.209665 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-05-19 14:57:28.209676 | orchestrator | Monday 19 May 2025 14:55:50 +0000 (0:00:03.514) 0:00:17.648 ************ 
2025-05-19 14:57:28.209686 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 14:57:28.209697 | orchestrator | 2025-05-19 14:57:28.209708 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-05-19 14:57:28.209718 | orchestrator | Monday 19 May 2025 14:55:54 +0000 (0:00:03.275) 0:00:20.923 ************ 2025-05-19 14:57:28.209729 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-05-19 14:57:28.209739 | orchestrator | 2025-05-19 14:57:28.209750 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-05-19 14:57:28.209760 | orchestrator | Monday 19 May 2025 14:55:58 +0000 (0:00:04.078) 0:00:25.002 ************ 2025-05-19 14:57:28.209771 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:57:28.209782 | orchestrator | 2025-05-19 14:57:28.209792 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-05-19 14:57:28.209804 | orchestrator | Monday 19 May 2025 14:56:01 +0000 (0:00:03.460) 0:00:28.463 ************ 2025-05-19 14:57:28.209824 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:57:28.209836 | orchestrator | 2025-05-19 14:57:28.209847 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-19 14:57:28.209857 | orchestrator | Monday 19 May 2025 14:56:05 +0000 (0:00:03.899) 0:00:32.362 ************ 2025-05-19 14:57:28.209868 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:57:28.209879 | orchestrator | 2025-05-19 14:57:28.209890 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-19 14:57:28.209900 | orchestrator | Monday 19 May 2025 14:56:09 +0000 (0:00:03.817) 0:00:36.179 ************ 2025-05-19 14:57:28.209914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.209944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.209956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.209976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.209988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.210000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.210092 | orchestrator | 2025-05-19 14:57:28.210105 | orchestrator | TASK [magnum : Check if policies shall be overwritten] 
************************* 2025-05-19 14:57:28.210116 | orchestrator | Monday 19 May 2025 14:56:10 +0000 (0:00:01.407) 0:00:37.587 ************ 2025-05-19 14:57:28.210127 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:57:28.210138 | orchestrator | 2025-05-19 14:57:28.210149 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-19 14:57:28.210159 | orchestrator | Monday 19 May 2025 14:56:11 +0000 (0:00:00.119) 0:00:37.706 ************ 2025-05-19 14:57:28.210180 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:57:28.210206 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:57:28.210231 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:57:28.210250 | orchestrator | 2025-05-19 14:57:28.210271 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-19 14:57:28.210291 | orchestrator | Monday 19 May 2025 14:56:11 +0000 (0:00:00.472) 0:00:38.179 ************ 2025-05-19 14:57:28.210310 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:57:28.210330 | orchestrator | 2025-05-19 14:57:28.210350 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-19 14:57:28.210370 | orchestrator | Monday 19 May 2025 14:56:12 +0000 (0:00:00.906) 0:00:39.085 ************ 2025-05-19 14:57:28.210399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.210433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.210454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.210489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.210518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.210541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.210561 | orchestrator | 2025-05-19 14:57:28.210580 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-19 14:57:28.210600 | orchestrator | Monday 19 May 2025 14:56:14 +0000 (0:00:02.410) 0:00:41.496 ************ 2025-05-19 14:57:28.210619 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:57:28.210639 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:57:28.210661 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:57:28.210680 | orchestrator | 2025-05-19 14:57:28.210698 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2025-05-19 14:57:28.210716 | orchestrator | Monday 19 May 2025 14:56:15 +0000 (0:00:00.288) 0:00:41.785 ************ 2025-05-19 14:57:28.210727 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:57:28.210738 | orchestrator | 2025-05-19 14:57:28.210749 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-19 14:57:28.210759 | orchestrator | Monday 19 May 2025 14:56:15 +0000 (0:00:00.818) 0:00:42.603 ************ 2025-05-19 14:57:28.210771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.210791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.210807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.210819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.210838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.210850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.210868 | orchestrator | 2025-05-19 14:57:28.210879 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-19 14:57:28.210890 | orchestrator | Monday 19 May 2025 14:56:18 +0000 (0:00:02.420) 0:00:45.024 ************ 2025-05-19 14:57:28.210901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:57:28.210913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:57:28.210924 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:57:28.210940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:57:28.210959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:57:28.210977 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:57:28.210988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:57:28.211000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:57:28.211033 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:57:28.211045 | orchestrator | 2025-05-19 14:57:28.211056 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-19 14:57:28.211067 | orchestrator | Monday 19 May 2025 14:56:18 +0000 (0:00:00.642) 0:00:45.666 ************ 2025-05-19 14:57:28.211088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:57:28.211100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:57:28.211112 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:57:28.211130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:57:28.211150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:57:28.211162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:57:28.211177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:57:28.211189 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:57:28.211200 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:57:28.211211 | orchestrator | 2025-05-19 14:57:28.211222 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-19 14:57:28.211232 | orchestrator | Monday 19 May 2025 14:56:20 +0000 (0:00:01.142) 0:00:46.808 ************ 2025-05-19 14:57:28.211250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.211268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.211280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.211292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.211307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.211325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.211344 | orchestrator | 2025-05-19 14:57:28.211355 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-19 14:57:28.211367 | orchestrator | Monday 19 May 2025 14:56:22 +0000 (0:00:02.384) 0:00:49.193 ************ 2025-05-19 14:57:28.211378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.211389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.211405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.211417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.211441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.211454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.211465 | orchestrator | 2025-05-19 14:57:28.211477 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-19 14:57:28.211487 | orchestrator | Monday 19 May 2025 14:56:27 +0000 (0:00:05.025) 0:00:54.218 ************ 2025-05-19 14:57:28.211499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:57:28.211510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:57:28.211526 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:57:28.211538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:57:28.211563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:57:28.211575 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:57:28.211586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 14:57:28.211598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:57:28.211609 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:57:28.211620 | orchestrator | 2025-05-19 14:57:28.211631 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-19 14:57:28.211642 | orchestrator | Monday 19 May 2025 14:56:28 +0000 (0:00:00.804) 0:00:55.023 ************ 2025-05-19 14:57:28.211657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.211680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.211692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 14:57:28.211704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.211715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.211731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:57:28.211748 | orchestrator | 2025-05-19 14:57:28.211759 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-19 14:57:28.211770 | orchestrator | Monday 19 May 2025 14:56:30 +0000 
(0:00:02.323) 0:00:57.347 ************ 2025-05-19 14:57:28.211781 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:57:28.211792 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:57:28.211803 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:57:28.211814 | orchestrator | 2025-05-19 14:57:28.211825 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-19 14:57:28.211836 | orchestrator | Monday 19 May 2025 14:56:31 +0000 (0:00:00.345) 0:00:57.692 ************ 2025-05-19 14:57:28.211847 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:57:28.211857 | orchestrator | 2025-05-19 14:57:28.211868 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-19 14:57:28.211879 | orchestrator | Monday 19 May 2025 14:56:33 +0000 (0:00:02.034) 0:00:59.727 ************ 2025-05-19 14:57:28.211890 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:57:28.211900 | orchestrator | 2025-05-19 14:57:28.211911 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-19 14:57:28.211922 | orchestrator | Monday 19 May 2025 14:56:35 +0000 (0:00:02.192) 0:01:01.920 ************ 2025-05-19 14:57:28.211938 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:57:28.211949 | orchestrator | 2025-05-19 14:57:28.211960 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-19 14:57:28.211971 | orchestrator | Monday 19 May 2025 14:56:50 +0000 (0:00:14.939) 0:01:16.859 ************ 2025-05-19 14:57:28.211981 | orchestrator | 2025-05-19 14:57:28.211992 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-19 14:57:28.212003 | orchestrator | Monday 19 May 2025 14:56:50 +0000 (0:00:00.060) 0:01:16.919 ************ 2025-05-19 14:57:28.212060 | orchestrator | 2025-05-19 14:57:28.212071 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-19 14:57:28.212082 | orchestrator | Monday 19 May 2025 14:56:50 +0000 (0:00:00.057) 0:01:16.977 ************ 2025-05-19 14:57:28.212093 | orchestrator | 2025-05-19 14:57:28.212104 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-19 14:57:28.212115 | orchestrator | Monday 19 May 2025 14:56:50 +0000 (0:00:00.064) 0:01:17.042 ************ 2025-05-19 14:57:28.212125 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:57:28.212136 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:57:28.212147 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:57:28.212158 | orchestrator | 2025-05-19 14:57:28.212168 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-19 14:57:28.212179 | orchestrator | Monday 19 May 2025 14:57:11 +0000 (0:00:21.003) 0:01:38.045 ************ 2025-05-19 14:57:28.212190 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:57:28.212200 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:57:28.212211 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:57:28.212222 | orchestrator | 2025-05-19 14:57:28.212232 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 14:57:28.212241 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 14:57:28.212251 | orchestrator | testbed-node-1 : ok=13  changed=8  
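The play above follows kolla-ansible's usual bootstrap shape: create the service database and user, run a one-off bootstrap container for schema migrations, then let the flushed handlers restart the real containers. A minimal sketch of the two database steps, assuming PyMySQL against the internal VIP; the host and all secrets below are placeholders, not values from this deployment:

```python
"""Minimal sketch of the "Creating Magnum database (user)" steps."""
import pymysql  # pip install pymysql

DB_HOST = "192.168.16.9"     # placeholder for the internal VIP
ROOT_PASSWORD = "change-me"  # placeholder

conn = pymysql.connect(host=DB_HOST, user="root", password=ROOT_PASSWORD)
try:
    with conn.cursor() as cur:
        # IF NOT EXISTS keeps both steps safe to re-run, which is why
        # they report "changed" on the first deploy and "ok" afterwards.
        cur.execute("CREATE DATABASE IF NOT EXISTS magnum")
        cur.execute(
            "CREATE USER IF NOT EXISTS 'magnum'@'%%' IDENTIFIED BY %s",
            ("magnum-db-password",),  # placeholder secret
        )
        cur.execute("GRANT ALL PRIVILEGES ON magnum.* TO 'magnum'@'%'")
    conn.commit()
finally:
    conn.close()
```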
unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 14:57:28.212261 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 14:57:28.212277 | orchestrator | 2025-05-19 14:57:28.212286 | orchestrator | 2025-05-19 14:57:28.212296 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 14:57:28.212305 | orchestrator | Monday 19 May 2025 14:57:26 +0000 (0:00:15.046) 0:01:53.092 ************ 2025-05-19 14:57:28.212315 | orchestrator | =============================================================================== 2025-05-19 14:57:28.212324 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.00s 2025-05-19 14:57:28.212334 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.05s 2025-05-19 14:57:28.212343 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.94s 2025-05-19 14:57:28.212353 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.47s 2025-05-19 14:57:28.212362 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.03s 2025-05-19 14:57:28.212372 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.08s 2025-05-19 14:57:28.212381 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.90s 2025-05-19 14:57:28.212391 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.82s 2025-05-19 14:57:28.212400 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.64s 2025-05-19 14:57:28.212410 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.51s 2025-05-19 14:57:28.212419 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.46s 2025-05-19 14:57:28.212429 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.28s 2025-05-19 14:57:28.212438 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.69s 2025-05-19 14:57:28.212448 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.42s 2025-05-19 14:57:28.212464 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.41s 2025-05-19 14:57:28.212474 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.38s 2025-05-19 14:57:28.212484 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.32s 2025-05-19 14:57:28.212493 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.19s 2025-05-19 14:57:28.212503 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.03s 2025-05-19 14:57:28.212512 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.41s 2025-05-19 14:57:28.212522 | orchestrator | 2025-05-19 14:57:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:57:31.241200 | orchestrator | 2025-05-19 14:57:31 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:57:31.242162 | orchestrator | 2025-05-19 14:57:31 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED 2025-05-19 14:57:31.243863 | orchestrator | 
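With the magnum play finished, the deploy wrapper goes back to polling its remaining task IDs, which produces the repeated status lines that follow. A minimal sketch of that loop; get_task_state() is a hypothetical stand-in for the real status lookup, and only the loop shape and messages mirror the log:

```python
"""Shape of the status poll visible below."""
import time


def get_task_state(task_id: str) -> str:
    raise NotImplementedError("stand-in for the real status lookup")


def wait_for_tasks(task_ids, interval=1.0):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```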
2025-05-19 14:57:31 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 14:57:31.243895 | orchestrator | 2025-05-19 14:57:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:57:34.291651 | orchestrator | 2025-05-19 14:57:34 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:57:34.293403 | orchestrator | 2025-05-19 14:57:34 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED 2025-05-19 14:57:34.294506 | orchestrator | 2025-05-19 14:57:34 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 14:57:34.294646 | orchestrator | 2025-05-19 14:57:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:57:37.346648 | orchestrator | 2025-05-19 14:57:37 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:57:37.350672 | orchestrator | 2025-05-19 14:57:37 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED 2025-05-19 14:57:37.352849 | orchestrator | 2025-05-19 14:57:37 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 14:57:37.352906 | orchestrator | 2025-05-19 14:57:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:57:40.402233 | orchestrator | 2025-05-19 14:57:40 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:57:40.402955 | orchestrator | 2025-05-19 14:57:40 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED 2025-05-19 14:57:40.405969 | orchestrator | 2025-05-19 14:57:40 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 14:57:40.406309 | orchestrator | 2025-05-19 14:57:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:57:43.471900 | orchestrator | 2025-05-19 14:57:43 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:57:43.475389 | orchestrator | 2025-05-19 14:57:43 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED 2025-05-19 14:57:43.477334 | orchestrator | 2025-05-19 14:57:43 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 14:57:43.477382 | orchestrator | 2025-05-19 14:57:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:57:46.534690 | orchestrator | 2025-05-19 14:57:46 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:57:46.535836 | orchestrator | 2025-05-19 14:57:46 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED 2025-05-19 14:57:46.537291 | orchestrator | 2025-05-19 14:57:46 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 14:57:46.537419 | orchestrator | 2025-05-19 14:57:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:57:49.597448 | orchestrator | 2025-05-19 14:57:49 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:57:49.599546 | orchestrator | 2025-05-19 14:57:49 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED 2025-05-19 14:57:49.601682 | orchestrator | 2025-05-19 14:57:49 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 14:57:49.601706 | orchestrator | 2025-05-19 14:57:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:57:52.656474 | orchestrator | 2025-05-19 14:57:52 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:57:52.658991 | orchestrator | 2025-05-19 14:57:52 | INFO  | Task 
66951634-f866-4628-8723-583d7373f2c4 is in state STARTED 2025-05-19 14:57:52.661096 | orchestrator | 2025-05-19 14:57:52 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 14:57:52.661379 | orchestrator | 2025-05-19 14:57:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:57:55.707870 | orchestrator | 2025-05-19 14:57:55 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:57:55.709187 | orchestrator | 2025-05-19 14:57:55 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED 2025-05-19 14:57:55.713576 | orchestrator | 2025-05-19 14:57:55 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 14:57:55.713637 | orchestrator | 2025-05-19 14:57:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:57:58.766603 | orchestrator | 2025-05-19 14:57:58 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state STARTED 2025-05-19 14:57:58.769465 | orchestrator | 2025-05-19 14:57:58 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED 2025-05-19 14:57:58.771536 | orchestrator | 2025-05-19 14:57:58 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 14:57:58.771584 | orchestrator | 2025-05-19 14:57:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 14:58:01.823053 | orchestrator | 2025-05-19 14:58:01 | INFO  | Task f1a37332-2342-4416-b799-9db1b8d29db6 is in state SUCCESS 2025-05-19 14:58:01.825877 | orchestrator | 2025-05-19 14:58:01.826186 | orchestrator | 2025-05-19 14:58:01.826212 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 14:58:01.826225 | orchestrator | 2025-05-19 14:58:01.826236 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-19 14:58:01.826248 | orchestrator | Monday 19 May 2025 14:49:22 +0000 (0:00:00.227) 0:00:00.227 ************ 2025-05-19 14:58:01.826259 | orchestrator | changed: [testbed-manager] 2025-05-19 14:58:01.826272 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.826283 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:58:01.826294 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:58:01.826304 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:58:01.826315 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:58:01.826326 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:58:01.826337 | orchestrator | 2025-05-19 14:58:01.826348 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:58:01.826358 | orchestrator | Monday 19 May 2025 14:49:23 +0000 (0:00:01.432) 0:00:01.659 ************ 2025-05-19 14:58:01.826370 | orchestrator | changed: [testbed-manager] 2025-05-19 14:58:01.826382 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.826394 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:58:01.826407 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:58:01.826419 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:58:01.826431 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:58:01.826443 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:58:01.826455 | orchestrator | 2025-05-19 14:58:01.826468 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:58:01.826534 | orchestrator | Monday 19 May 2025 14:49:24 +0000 (0:00:01.355) 0:00:03.015 ************ 2025-05-19 
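The "Group hosts based on ..." tasks here (their per-item results continue just below) implement Ansible's group_by pattern: each host is placed into a dynamically named group derived from a variable, so later plays can target groups such as enable_nova_True directly. An illustration of the partitioning, with invented host variables:

```python
"""Host grouping as in the play above; hostvars are invented."""
from collections import defaultdict

hostvars = {
    "testbed-manager": {"openstack_release": "2024.2", "enable_nova": True},
    "testbed-node-0": {"openstack_release": "2024.2", "enable_nova": True},
}

groups = defaultdict(list)
for host, facts in hostvars.items():
    # One dynamic group per task: release, then per enabled service.
    groups[f"openstack_release_{facts['openstack_release']}"].append(host)
    groups[f"enable_nova_{facts['enable_nova']}"].append(host)

# e.g. groups["enable_nova_True"] now lists every host, matching the
# (item=enable_nova_True) results in the log.
print(dict(groups))
```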
14:58:01.826545 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-19 14:58:01.826556 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-19 14:58:01.826567 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-19 14:58:01.826578 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-19 14:58:01.826588 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-19 14:58:01.826599 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-05-19 14:58:01.826610 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-05-19 14:58:01.826629 | orchestrator | 2025-05-19 14:58:01.826649 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-19 14:58:01.826668 | orchestrator | 2025-05-19 14:58:01.826687 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-19 14:58:01.826706 | orchestrator | Monday 19 May 2025 14:49:25 +0000 (0:00:01.057) 0:00:04.072 ************ 2025-05-19 14:58:01.826725 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:58:01.826744 | orchestrator | 2025-05-19 14:58:01.826764 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-19 14:58:01.826782 | orchestrator | Monday 19 May 2025 14:49:26 +0000 (0:00:01.086) 0:00:05.159 ************ 2025-05-19 14:58:01.826794 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-19 14:58:01.826806 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-19 14:58:01.826816 | orchestrator | 2025-05-19 14:58:01.826827 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-19 14:58:01.826838 | orchestrator | Monday 19 May 2025 14:49:30 +0000 (0:00:03.713) 0:00:08.872 ************ 2025-05-19 14:58:01.826873 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 14:58:01.826885 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 14:58:01.826896 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.826907 | orchestrator | 2025-05-19 14:58:01.826917 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-19 14:58:01.826928 | orchestrator | Monday 19 May 2025 14:49:34 +0000 (0:00:03.417) 0:00:12.289 ************ 2025-05-19 14:58:01.826939 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.826949 | orchestrator | 2025-05-19 14:58:01.826960 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-19 14:58:01.826971 | orchestrator | Monday 19 May 2025 14:49:34 +0000 (0:00:00.588) 0:00:12.878 ************ 2025-05-19 14:58:01.826981 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.826992 | orchestrator | 2025-05-19 14:58:01.827029 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-19 14:58:01.827056 | orchestrator | Monday 19 May 2025 14:49:36 +0000 (0:00:01.363) 0:00:14.241 ************ 2025-05-19 14:58:01.827067 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.827078 | orchestrator | 2025-05-19 14:58:01.827089 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 14:58:01.827099 | orchestrator | Monday 19 May 2025 14:49:38 +0000 (0:00:02.771) 
0:00:17.013 ************ 2025-05-19 14:58:01.827110 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.827120 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.827131 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.827142 | orchestrator | 2025-05-19 14:58:01.827152 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-19 14:58:01.827163 | orchestrator | Monday 19 May 2025 14:49:39 +0000 (0:00:00.464) 0:00:17.478 ************ 2025-05-19 14:58:01.827174 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:58:01.827185 | orchestrator | 2025-05-19 14:58:01.827196 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-19 14:58:01.827207 | orchestrator | Monday 19 May 2025 14:50:06 +0000 (0:00:26.762) 0:00:44.240 ************ 2025-05-19 14:58:01.827218 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.827228 | orchestrator | 2025-05-19 14:58:01.827239 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-19 14:58:01.827250 | orchestrator | Monday 19 May 2025 14:50:17 +0000 (0:00:11.115) 0:00:55.356 ************ 2025-05-19 14:58:01.827261 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:58:01.827272 | orchestrator | 2025-05-19 14:58:01.827282 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-19 14:58:01.827293 | orchestrator | Monday 19 May 2025 14:50:26 +0000 (0:00:09.388) 0:01:04.744 ************ 2025-05-19 14:58:01.827324 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:58:01.827336 | orchestrator | 2025-05-19 14:58:01.827347 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-19 14:58:01.827358 | orchestrator | Monday 19 May 2025 14:50:27 +0000 (0:00:01.118) 0:01:05.863 ************ 2025-05-19 14:58:01.827369 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.827379 | orchestrator | 2025-05-19 14:58:01.827390 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 14:58:01.827401 | orchestrator | Monday 19 May 2025 14:50:28 +0000 (0:00:00.506) 0:01:06.370 ************ 2025-05-19 14:58:01.827413 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:58:01.827424 | orchestrator | 2025-05-19 14:58:01.827434 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-19 14:58:01.827445 | orchestrator | Monday 19 May 2025 14:50:28 +0000 (0:00:00.504) 0:01:06.874 ************ 2025-05-19 14:58:01.827456 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:58:01.827466 | orchestrator | 2025-05-19 14:58:01.827477 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-19 14:58:01.827488 | orchestrator | Monday 19 May 2025 14:50:45 +0000 (0:00:16.923) 0:01:23.797 ************ 2025-05-19 14:58:01.827512 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.827524 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.827534 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.827545 | orchestrator | 2025-05-19 14:58:01.827556 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-05-19 14:58:01.827566 | orchestrator | 2025-05-19 14:58:01.827577 | orchestrator | TASK 
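The cell bootstrap steps in this region (the cell0 mapping above, and the cell creation in the play that follows) correspond to nova-manage cell_v2 subcommands, which kolla-ansible runs inside one-off bootstrap containers. A sketch of the equivalent direct calls; invoking nova-manage this way is an approximation, the cell name is assumed, and the database/transport URL plumbing is omitted:

```python
"""Equivalent nova-manage cell_v2 calls for the cell bootstrap steps."""
import subprocess


def nova_manage(*args: str) -> str:
    result = subprocess.run(
        ["nova-manage", "cell_v2", *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


# "Create cell0 mappings" — register the special cell0 database.
nova_manage("map_cell0")

# "Get a list of existing cells" — parsed to decide create vs. update.
existing = nova_manage("list_cells", "--verbose")

# "Create cell" — only needed when no matching cell exists yet, which
# is why it is "changed" here and would be skipped on a re-deploy.
if "cell1" not in existing:
    nova_manage("create_cell", "--name", "cell1")
```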
[Bootstrap deploy] ******************************************************** 2025-05-19 14:58:01.827587 | orchestrator | Monday 19 May 2025 14:50:45 +0000 (0:00:00.297) 0:01:24.095 ************ 2025-05-19 14:58:01.827598 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:58:01.827609 | orchestrator | 2025-05-19 14:58:01.827620 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-19 14:58:01.827630 | orchestrator | Monday 19 May 2025 14:50:46 +0000 (0:00:00.540) 0:01:24.635 ************ 2025-05-19 14:58:01.827641 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.827652 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.827663 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.827674 | orchestrator | 2025-05-19 14:58:01.827685 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-05-19 14:58:01.827695 | orchestrator | Monday 19 May 2025 14:50:48 +0000 (0:00:01.994) 0:01:26.630 ************ 2025-05-19 14:58:01.827706 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.827717 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.827728 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.827739 | orchestrator | 2025-05-19 14:58:01.827750 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-19 14:58:01.827761 | orchestrator | Monday 19 May 2025 14:50:50 +0000 (0:00:01.975) 0:01:28.606 ************ 2025-05-19 14:58:01.827778 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.827796 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.827815 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.827834 | orchestrator | 2025-05-19 14:58:01.827851 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-19 14:58:01.827869 | orchestrator | Monday 19 May 2025 14:50:50 +0000 (0:00:00.295) 0:01:28.902 ************ 2025-05-19 14:58:01.827887 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-19 14:58:01.827906 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.827922 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-19 14:58:01.827933 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.827946 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-19 14:58:01.827964 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-19 14:58:01.827981 | orchestrator | 2025-05-19 14:58:01.827996 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-19 14:58:01.828047 | orchestrator | Monday 19 May 2025 14:50:58 +0000 (0:00:07.803) 0:01:36.705 ************ 2025-05-19 14:58:01.828067 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.828085 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.828104 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.828115 | orchestrator | 2025-05-19 14:58:01.828126 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-19 14:58:01.828145 | orchestrator | Monday 19 May 2025 14:50:58 +0000 (0:00:00.359) 0:01:37.065 ************ 2025-05-19 14:58:01.828156 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-19 14:58:01.828166 | orchestrator | skipping: [testbed-node-0] 2025-05-19 
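The "Ensure RabbitMQ vhosts/users exist" tasks around this point are delegated to a RabbitMQ host (hence the unrendered {{ service_rabbitmq_delegate_host }} marker in the result) and only run where changes are needed. The role itself drives RabbitMQ through Ansible modules; one way to reproduce the same idempotent semantics outside Ansible is the HTTP management API, with placeholder host and credentials:

```python
"""Idempotent vhost/user creation via RabbitMQ's management API."""
import requests  # pip install requests

BASE = "http://192.168.16.9:15672/api"  # placeholder management URL
AUTH = ("admin", "change-me")           # placeholder credentials

# PUT is idempotent: re-creating an existing vhost or user succeeds,
# which matches the ok/skipped results in the log.
requests.put(f"{BASE}/vhosts/%2F", auth=AUTH, timeout=10).raise_for_status()
requests.put(
    f"{BASE}/users/nova",
    json={"password": "nova-rabbit-password", "tags": ""},  # placeholder
    auth=AUTH,
    timeout=10,
).raise_for_status()
```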
14:58:01.828177 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-19 14:58:01.828187 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.828198 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-19 14:58:01.828208 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.828219 | orchestrator | 2025-05-19 14:58:01.828229 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-19 14:58:01.828249 | orchestrator | Monday 19 May 2025 14:50:59 +0000 (0:00:00.786) 0:01:37.852 ************ 2025-05-19 14:58:01.828260 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.828270 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.828281 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.828291 | orchestrator | 2025-05-19 14:58:01.828302 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-19 14:58:01.828313 | orchestrator | Monday 19 May 2025 14:51:00 +0000 (0:00:00.457) 0:01:38.309 ************ 2025-05-19 14:58:01.828324 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.828334 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.828345 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.828356 | orchestrator | 2025-05-19 14:58:01.828367 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-19 14:58:01.828378 | orchestrator | Monday 19 May 2025 14:51:00 +0000 (0:00:00.872) 0:01:39.181 ************ 2025-05-19 14:58:01.828388 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.828399 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.828421 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.828432 | orchestrator | 2025-05-19 14:58:01.828448 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-19 14:58:01.828466 | orchestrator | Monday 19 May 2025 14:51:03 +0000 (0:00:02.832) 0:01:42.013 ************ 2025-05-19 14:58:01.828477 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.828488 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.828499 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:58:01.828510 | orchestrator | 2025-05-19 14:58:01.828521 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-19 14:58:01.828531 | orchestrator | Monday 19 May 2025 14:51:25 +0000 (0:00:21.409) 0:02:03.423 ************ 2025-05-19 14:58:01.828542 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.828553 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.828564 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:58:01.828574 | orchestrator | 2025-05-19 14:58:01.828585 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-19 14:58:01.828596 | orchestrator | Monday 19 May 2025 14:51:36 +0000 (0:00:11.170) 0:02:14.593 ************ 2025-05-19 14:58:01.828607 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:58:01.828618 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.828628 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.828639 | orchestrator | 2025-05-19 14:58:01.828650 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-19 14:58:01.828661 | orchestrator | Monday 19 May 2025 14:51:37 +0000 (0:00:01.348) 0:02:15.941 
************ 2025-05-19 14:58:01.828671 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.828682 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.828693 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.828704 | orchestrator | 2025-05-19 14:58:01.828715 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-19 14:58:01.828726 | orchestrator | Monday 19 May 2025 14:51:49 +0000 (0:00:11.434) 0:02:27.376 ************ 2025-05-19 14:58:01.828736 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.828747 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.828758 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.828768 | orchestrator | 2025-05-19 14:58:01.828779 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-19 14:58:01.828790 | orchestrator | Monday 19 May 2025 14:51:51 +0000 (0:00:01.825) 0:02:29.202 ************ 2025-05-19 14:58:01.828801 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.828812 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.828822 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.828833 | orchestrator | 2025-05-19 14:58:01.828844 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-19 14:58:01.828855 | orchestrator | 2025-05-19 14:58:01.828872 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 14:58:01.828883 | orchestrator | Monday 19 May 2025 14:51:52 +0000 (0:00:01.020) 0:02:30.222 ************ 2025-05-19 14:58:01.828894 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:58:01.828906 | orchestrator | 2025-05-19 14:58:01.828922 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-19 14:58:01.828940 | orchestrator | Monday 19 May 2025 14:51:53 +0000 (0:00:01.118) 0:02:31.341 ************ 2025-05-19 14:58:01.828959 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-19 14:58:01.828976 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-19 14:58:01.828995 | orchestrator | 2025-05-19 14:58:01.829089 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-19 14:58:01.829109 | orchestrator | Monday 19 May 2025 14:51:56 +0000 (0:00:03.111) 0:02:34.452 ************ 2025-05-19 14:58:01.829120 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-19 14:58:01.829133 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-19 14:58:01.829145 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-19 14:58:01.829163 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-19 14:58:01.829175 | orchestrator | 2025-05-19 14:58:01.829186 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-19 14:58:01.829197 | orchestrator | Monday 19 May 2025 14:52:02 +0000 (0:00:06.283) 0:02:40.735 ************ 2025-05-19 14:58:01.829207 | orchestrator | ok: [testbed-node-0] 
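The service-ks-register tasks above create the nova service and its internal/public endpoints in Keystone; the project, user, role, and grant steps continue just below. A sketch of the same registrations via openstacksdk; the endpoint URLs are taken from the log, while the cloud entry and region are assumptions:

```python
"""The nova service/endpoint registrations via openstacksdk."""
import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

service = conn.identity.create_service(name="nova", type="compute")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:8774/v2.1"),
    ("public", "https://api.testbed.osism.xyz:8774/v2.1"),
]:
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",  # assumption: region not shown in the log
    )
```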
=> (item=service) 2025-05-19 14:58:01.829218 | orchestrator | 2025-05-19 14:58:01.829229 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-19 14:58:01.829239 | orchestrator | Monday 19 May 2025 14:52:05 +0000 (0:00:03.215) 0:02:43.950 ************ 2025-05-19 14:58:01.829250 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 14:58:01.829260 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-19 14:58:01.829271 | orchestrator | 2025-05-19 14:58:01.829281 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-19 14:58:01.829290 | orchestrator | Monday 19 May 2025 14:52:09 +0000 (0:00:03.860) 0:02:47.811 ************ 2025-05-19 14:58:01.829300 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 14:58:01.829309 | orchestrator | 2025-05-19 14:58:01.829319 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-19 14:58:01.829328 | orchestrator | Monday 19 May 2025 14:52:12 +0000 (0:00:03.293) 0:02:51.104 ************ 2025-05-19 14:58:01.829338 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-19 14:58:01.829347 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-19 14:58:01.829357 | orchestrator | 2025-05-19 14:58:01.829366 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-19 14:58:01.829384 | orchestrator | Monday 19 May 2025 14:52:20 +0000 (0:00:07.282) 0:02:58.387 ************ 2025-05-19 14:58:01.829400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.829424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.829436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.829492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.829506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.829523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.829534 | orchestrator | 2025-05-19 14:58:01.829544 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-19 14:58:01.829553 | orchestrator | Monday 19 May 2025 14:52:21 +0000 (0:00:01.325) 0:02:59.713 ************ 2025-05-19 14:58:01.829563 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.829573 | orchestrator | 2025-05-19 14:58:01.829582 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-19 14:58:01.829592 | orchestrator | Monday 19 May 2025 14:52:21 +0000 (0:00:00.113) 0:02:59.826 ************ 2025-05-19 14:58:01.829601 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.829611 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.829621 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.829630 | orchestrator | 2025-05-19 14:58:01.829640 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-19 14:58:01.829649 | orchestrator | Monday 19 May 2025 14:52:22 +0000 (0:00:00.689) 0:03:00.515 ************ 2025-05-19 14:58:01.829658 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:58:01.829668 | orchestrator | 2025-05-19 14:58:01.829677 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-19 14:58:01.829687 | orchestrator | Monday 19 May 2025 14:52:23 +0000 (0:00:01.443) 0:03:01.959 ************ 2025-05-19 14:58:01.829696 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.829706 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.829716 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.829725 | orchestrator | 2025-05-19 14:58:01.829735 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 14:58:01.829744 | orchestrator | Monday 19 May 2025 14:52:24 +0000 (0:00:00.284) 0:03:02.244 ************ 2025-05-19 14:58:01.829754 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:58:01.829763 | orchestrator | 2025-05-19 14:58:01.829773 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-19 14:58:01.829782 | orchestrator | Monday 19 May 2025 14:52:25 +0000 (0:00:01.074) 0:03:03.318 ************ 2025-05-19 14:58:01.829797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.829837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.829857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.829882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.829902 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.829929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.829953 | orchestrator | 2025-05-19 14:58:01.829964 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-19 14:58:01.829973 | orchestrator | Monday 19 May 2025 14:52:27 +0000 (0:00:02.605) 0:03:05.924 ************ 2025-05-19 14:58:01.829984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:58:01.829995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:58:01.830312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.830336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.830367 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.830384 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.830418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:58:01.830431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.830441 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.830451 | orchestrator | 2025-05-19 14:58:01.830461 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-19 14:58:01.830470 | orchestrator | Monday 19 May 2025 14:52:28 +0000 (0:00:00.911) 0:03:06.836 ************ 2025-05-19 14:58:01.830486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:58:01.830497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.830514 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.830533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:58:01.830545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.830555 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.830565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:58:01.830581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.830597 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.830607 | orchestrator | 2025-05-19 14:58:01.830617 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-19 14:58:01.830627 | orchestrator | Monday 19 May 2025 14:52:29 +0000 (0:00:01.060) 0:03:07.897 ************ 2025-05-19 14:58:01.830688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.830700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.830716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.830734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.830750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.830761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.830783 | orchestrator | 2025-05-19 14:58:01.830793 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-19 14:58:01.830803 | orchestrator | Monday 19 May 2025 14:52:32 +0000 (0:00:02.563) 0:03:10.460 ************ 2025-05-19 14:58:01.830854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.830896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.830924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.830935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.830946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.830956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.830966 | orchestrator | 2025-05-19 14:58:01.830975 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-19 14:58:01.830985 | orchestrator | Monday 19 May 2025 14:52:41 +0000 (0:00:09.268) 0:03:19.728 ************ 2025-05-19 14:58:01.831024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:58:01.831050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.831061 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.831072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:58:01.831082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 14:58:01.831104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.831115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.831125 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.831137 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.831154 | orchestrator | 2025-05-19 14:58:01.831170 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-19 14:58:01.831187 | orchestrator | Monday 19 May 2025 14:52:42 +0000 (0:00:01.009) 0:03:20.737 ************ 2025-05-19 14:58:01.831203 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.831219 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:58:01.831234 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:58:01.831250 | orchestrator | 
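
[Annotation] The healthcheck dicts that recur in the loop items above are kolla-ansible's per-container healthcheck definitions: interval, retries, start_period and timeout are handed to the container engine, while healthcheck_curl, healthcheck_port and healthcheck_listen are small helper scripts shipped inside the kolla images ('healthcheck_port nova-scheduler 5672', for example, passes only while the nova-scheduler process holds a connection to RabbitMQ on port 5672). A rough docker-compose-style equivalent of the nova-api check on testbed-node-0 — a sketch for orientation only, assuming the bare numbers are seconds:

services:
  nova_api:
    image: registry.osism.tech/kolla/nova-api:2024.2
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s
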
2025-05-19 14:58:01.831276 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-19 14:58:01.831290 | orchestrator | Monday 19 May 2025 14:52:44 +0000 (0:00:02.069) 0:03:22.807 ************ 2025-05-19 14:58:01.831299 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.831309 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.831319 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.831329 | orchestrator | 2025-05-19 14:58:01.831338 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-19 14:58:01.831348 | orchestrator | Monday 19 May 2025 14:52:45 +0000 (0:00:00.755) 0:03:23.562 ************ 2025-05-19 14:58:01.831358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.831378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.831405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 14:58:01.831439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.831450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.831460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.831477 | orchestrator |
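
[Annotation] The 'changed' results from the container check above queue restart notifications; the 'Flush handlers' tasks that follow force those queued handlers to run immediately rather than at the end of the play, which is why the nova-scheduler and nova-api restarts appear next. A minimal, generic sketch of this notify/flush pattern — illustrative task and file names, not kolla-ansible's actual role code, which uses its own container module:

- hosts: nova-api
  tasks:
    - name: Copy service configuration  # a change here queues the handler
      ansible.builtin.template:
        src: nova.conf.j2
        dest: /etc/kolla/nova-api/nova.conf
      notify: Restart nova-api container

    - name: Flush handlers  # run queued handlers now instead of at play end
      ansible.builtin.meta: flush_handlers

  handlers:
    - name: Restart nova-api container
      ansible.builtin.command: docker restart nova_api  # placeholder for kolla's container module
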
2025-05-19 14:58:01.831487 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-19 14:58:01.831496 | orchestrator | Monday 19 May 2025 14:52:47 +0000 (0:00:01.906) 0:03:25.468 ************ 2025-05-19 14:58:01.831506 | orchestrator | 2025-05-19 14:58:01.831516 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-19 14:58:01.831526 | orchestrator | Monday 19 May 2025 14:52:47 +0000 (0:00:00.259) 0:03:25.727 ************ 2025-05-19 14:58:01.831535 | orchestrator | 2025-05-19 14:58:01.831545 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-19 14:58:01.831554 | orchestrator | Monday 19 May 2025 14:52:47 +0000 (0:00:00.298) 0:03:26.026 ************ 2025-05-19 14:58:01.831563 | orchestrator | 2025-05-19 14:58:01.831573 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-19 14:58:01.831582 | orchestrator | Monday 19 May 2025 14:52:48 +0000 (0:00:00.486) 0:03:26.512 ************ 2025-05-19 14:58:01.831592 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.831602 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:58:01.831611 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:58:01.831621 | orchestrator | 2025-05-19 14:58:01.831630 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-19 14:58:01.831639 | orchestrator | Monday 19 May 2025 14:53:07 +0000 (0:00:19.579) 0:03:46.092 ************ 2025-05-19 14:58:01.831649 | orchestrator | changed: [testbed-node-1] 2025-05-19 14:58:01.831658 | orchestrator | changed: [testbed-node-0] 2025-05-19 14:58:01.831668 | orchestrator | changed: [testbed-node-2] 2025-05-19 14:58:01.831677 | orchestrator | 2025-05-19 14:58:01.831686 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-19 14:58:01.831696 | orchestrator | 2025-05-19 14:58:01.831710 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-19 14:58:01.831719 | orchestrator | Monday 19 May 2025 14:53:19 +0000 (0:00:11.559) 0:03:57.652 ************ 2025-05-19 14:58:01.831729 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:58:01.831809 | orchestrator | 2025-05-19 14:58:01.831820 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-19 14:58:01.831829 | orchestrator | Monday 19 May 2025 14:53:20 +0000 (0:00:01.119) 0:03:58.772 ************ 2025-05-19 14:58:01.831839 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.831849 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.831858 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.831868 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.831877 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.831887 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.831896 | orchestrator | 2025-05-19 14:58:01.831906 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-19 14:58:01.831915 | orchestrator | Monday 19 May 2025 14:53:21 +0000 (0:00:00.863) 0:03:59.636 ************ 2025-05-19 14:58:01.831925 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.831934 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.831944 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.831953 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:58:01.831963 | orchestrator | 2025-05-19 14:58:01.831972 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-19 14:58:01.831988 | orchestrator | Monday 19 May 2025 14:53:22 +0000 (0:00:00.949) 0:04:00.585 ************ 2025-05-19 14:58:01.831998 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-19 14:58:01.832067 | orchestrator 
| ok: [testbed-node-4] => (item=br_netfilter) 2025-05-19 14:58:01.832086 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-19 14:58:01.832095 | orchestrator | 2025-05-19 14:58:01.832105 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-19 14:58:01.832115 | orchestrator | Monday 19 May 2025 14:53:23 +0000 (0:00:00.980) 0:04:01.566 ************ 2025-05-19 14:58:01.832124 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-19 14:58:01.832134 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-19 14:58:01.832144 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-19 14:58:01.832153 | orchestrator | 2025-05-19 14:58:01.832163 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-19 14:58:01.832173 | orchestrator | Monday 19 May 2025 14:53:24 +0000 (0:00:01.387) 0:04:02.953 ************ 2025-05-19 14:58:01.832182 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-19 14:58:01.832192 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.832201 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-19 14:58:01.832211 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.832220 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-19 14:58:01.832229 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.832239 | orchestrator | 2025-05-19 14:58:01.832249 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-19 14:58:01.832259 | orchestrator | Monday 19 May 2025 14:53:25 +0000 (0:00:00.747) 0:04:03.701 ************ 2025-05-19 14:58:01.832268 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-19 14:58:01.832278 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-19 14:58:01.832294 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.832311 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-19 14:58:01.832329 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-19 14:58:01.832346 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.832365 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-19 14:58:01.832376 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-19 14:58:01.832385 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.832395 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-19 14:58:01.832404 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-19 14:58:01.832414 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-19 14:58:01.832423 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-19 14:58:01.832432 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-19 14:58:01.832442 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-19 14:58:01.832451 | orchestrator |
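
[Annotation] Loading br_netfilter and enabling the bridge-nf-call sysctls makes traffic crossing Linux bridges traverse iptables, which iptables-based security-group filtering on the compute hosts relies on; that is why these tasks skip the controllers (testbed-node-0..2) and change only the compute nodes (testbed-node-3..5). A standalone sketch with the same effect — not the nova-cell role itself, and assuming the community.general and ansible.posix collections are available:

- hosts: compute
  become: true
  tasks:
    - name: Load br_netfilter immediately
      community.general.modprobe:
        name: br_netfilter
        state: present

    - name: Persist br_netfilter across reboots via modules-load.d
      ansible.builtin.copy:
        dest: /etc/modules-load.d/br_netfilter.conf
        content: "br_netfilter\n"

    - name: Enable bridge-nf-call sysctl variables
      ansible.posix.sysctl:
        name: "{{ item }}"
        value: "1"
        state: present
      loop:
        - net.bridge.bridge-nf-call-iptables
        - net.bridge.bridge-nf-call-ip6tables
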
2025-05-19 14:58:01.832460 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-19 14:58:01.832470 | orchestrator | Monday 19 May 2025 14:53:26 +0000 (0:00:01.065) 0:04:04.766 ************ 2025-05-19 14:58:01.832479 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.832489 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:58:01.832498 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.832508 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.832517 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:58:01.832526 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:58:01.832536 | orchestrator | 2025-05-19 14:58:01.832545 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-19 14:58:01.832555 | orchestrator | Monday 19 May 2025 14:53:28 +0000 (0:00:02.155) 0:04:06.923 ************ 2025-05-19 14:58:01.832564 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.832580 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.832590 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.832604 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:58:01.832614 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:58:01.832623 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:58:01.832633 | orchestrator | 2025-05-19 14:58:01.832642 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-19 14:58:01.832652 | orchestrator | Monday 19 May 2025 14:53:31 +0000 (0:00:02.308) 0:04:09.231 ************ 2025-05-19 14:58:01.832663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832682 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832694 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832705 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832774 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832863 | orchestrator | 2025-05-19 14:58:01.832873 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-19 14:58:01.832882 | orchestrator | Monday 19 May 2025 14:53:33 +0000 (0:00:02.918) 0:04:12.150 ************ 2025-05-19 14:58:01.832892 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:58:01.832902 | orchestrator | 2025-05-19 14:58:01.832912 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-19 14:58:01.832921 | orchestrator | Monday 19 May 2025 14:53:35 +0000 (0:00:01.099) 0:04:13.250 ************ 2025-05-19 14:58:01.832931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.832993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.833048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.833066 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.833081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.833091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.833107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.833117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.833128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.833138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.833154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.833173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.833183 | orchestrator | 2025-05-19 14:58:01.833193 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-19 14:58:01.833202 | orchestrator | Monday 19 May 2025 14:53:39 +0000 (0:00:04.745) 0:04:17.995 ************ 2025-05-19 14:58:01.833219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.833230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.833240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.833253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.833266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833274 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.833288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833296 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.833305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.833313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.833326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833334 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.833346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 14:58:01.833354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833362 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.833377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 14:58:01.833385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833393 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.833407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 14:58:01.833415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833423 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.833431 | orchestrator | 2025-05-19 14:58:01.833439 | orchestrator | TASK [service-cert-copy : nova | Copying over backend 
internal TLS key] ******** 2025-05-19 14:58:01.833446 | orchestrator | Monday 19 May 2025 14:53:43 +0000 (0:00:03.566) 0:04:21.561 ************ 2025-05-19 14:58:01.833458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.833467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.833704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833717 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.833725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.833740 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.833749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 14:58:01.833770 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.833778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 14:58:01.833805 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.833814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833822 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.833830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.833839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.833851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833859 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.833868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 14:58:01.833881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.833894 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.833902 | orchestrator |
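Each service definition in the items above carries a healthcheck stanza whose test field names a helper shipped inside the kolla images (healthcheck_port, healthcheck_listen, healthcheck_curl). As a rough illustration only: the sketch below verifies that a TCP endpoint accepts connections, which is close to what healthcheck_listen checks; the real helpers do more (for example, tying the check to the named process), so this is a simplified stand-in, not the scripts themselves.

```python
#!/usr/bin/env python3
"""Simplified stand-in for a kolla-style TCP healthcheck probe.

The real helper scripts live inside the kolla images; this sketch
only checks that a TCP connection to the given port can be opened.
"""
import socket
import sys


def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    # A successful connect is treated as "healthy".
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    host, port = sys.argv[1], int(sys.argv[2])
    sys.exit(0 if port_open(host, port) else 1)  # exit 0 = healthy
```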
2025-05-19 14:58:01.833911 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-19 14:58:01.833919 | orchestrator | Monday 19 May 2025 14:53:46 +0000 (0:00:02.863) 0:04:24.425 ************ 2025-05-19 14:58:01.833927 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.833934 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.833942 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.833950 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 14:58:01.833958 | orchestrator | 2025-05-19 14:58:01.833966 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-19 14:58:01.833974 | orchestrator | Monday 19 May 2025 14:53:47 +0000 (0:00:01.075) 0:04:25.501 ************ 2025-05-19 14:58:01.833982 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 14:58:01.833990 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-19 14:58:01.833998 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-19 14:58:01.834070 | orchestrator | 2025-05-19 14:58:01.834090 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-19 14:58:01.834099 | orchestrator | Monday 19 May 2025 14:53:48 +0000 (0:00:01.106) 0:04:26.607 ************ 2025-05-19 14:58:01.834107 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-19 14:58:01.834114 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 14:58:01.834122 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-19 14:58:01.834130 | orchestrator | 2025-05-19 14:58:01.834138 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-19 14:58:01.834146 | orchestrator | Monday 19 May 2025 14:53:49 +0000 (0:00:00.975) 0:04:27.583 ************ 2025-05-19 14:58:01.834154 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:58:01.834176 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:58:01.834197 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:58:01.834205 | orchestrator | 2025-05-19 14:58:01.834213 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-19 14:58:01.834221 | orchestrator | Monday 19 May 2025 14:53:49 +0000 (0:00:00.393) 0:04:27.976 ************ 2025-05-19 14:58:01.834229 | orchestrator | ok: [testbed-node-3] 2025-05-19 14:58:01.834237 | orchestrator | ok: [testbed-node-4] 2025-05-19 14:58:01.834245 | orchestrator | ok: [testbed-node-5] 2025-05-19 14:58:01.834253 | orchestrator |
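The two "Extract ... key from file" tasks pull the base64 key out of the external ceph keyrings that were checked just before, so the key can later be handed to libvirt. A minimal sketch of that extraction step, assuming the usual keyring layout; the sample contents and entity name are illustrative, not this deployment's key:

```python
import re

# A ceph keyring is INI-ish: a "[client.nova]" section header followed
# by an indented "key = AQ...==" line.  This pulls the key out the way
# the role's extract tasks conceptually do.
def extract_key(keyring_text: str, entity: str = "client.nova") -> str:
    section = re.search(
        rf"\[{re.escape(entity)}\](.*?)(?=\n\[|\Z)", keyring_text, re.S
    )
    if not section:
        raise KeyError(f"{entity} not found in keyring")
    match = re.search(r"key\s*=\s*(\S+)", section.group(1))
    if not match:
        raise KeyError(f"no key line under [{entity}]")
    return match.group(1)


sample = "[client.nova]\n    key = AQDummyKeyOnlyForIllustration==\n"
print(extract_key(sample))
```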
2025-05-19 14:58:01.834261 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-19 14:58:01.834269 | orchestrator | Monday 19 May 2025 14:53:50 +0000 (0:00:00.479) 0:04:28.456 ************ 2025-05-19 14:58:01.834276 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-19 14:58:01.834284 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-19 14:58:01.834292 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-19 14:58:01.834300 | orchestrator | 2025-05-19 14:58:01.834308 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-19 14:58:01.834316 | orchestrator | Monday 19 May 2025 14:53:51 +0000 (0:00:01.250) 0:04:29.706 ************ 2025-05-19 14:58:01.834324 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-19 14:58:01.834333 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-19 14:58:01.834342 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-19 14:58:01.834351 | orchestrator | 2025-05-19 14:58:01.834360 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-19 14:58:01.834368 | orchestrator | Monday 19 May 2025 14:53:52 +0000 (0:00:01.173) 0:04:30.880 ************ 2025-05-19 14:58:01.834377 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-19 14:58:01.834397 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-19 14:58:01.834407 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-19 14:58:01.834416 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-19 14:58:01.834425 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-19 14:58:01.834434 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-19 14:58:01.834443 | orchestrator | 2025-05-19 14:58:01.834452 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-19 14:58:01.834461 | orchestrator | Monday 19 May 2025 14:53:57 +0000 (0:00:04.821) 0:04:35.701 ************ 2025-05-19 14:58:01.834470 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.834479 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.834487 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.834496 | orchestrator | 2025-05-19 14:58:01.834506 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-19 14:58:01.834514 | orchestrator | Monday 19 May 2025 14:53:57 +0000 (0:00:00.250) 0:04:35.952 ************ 2025-05-19 14:58:01.834523 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.834532 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.834541 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.834550 | orchestrator | 2025-05-19 14:58:01.834559 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-19 14:58:01.834568 | orchestrator | Monday 19 May 2025 14:53:58 +0000 (0:00:00.244) 0:04:36.196 ************ 2025-05-19 14:58:01.834577 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:58:01.834586 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:58:01.834595 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:58:01.834604 | orchestrator |
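"Copy over ceph.conf" places a client-side ceph configuration into each compute container's config directory. The copied file is essentially a small INI; a minimal sketch of writing one, assuming the usual client layout (fsid and mon_host here are placeholders, not the values of this testbed's external ceph):

```python
import configparser

# Minimal client-side ceph.conf; both values below are placeholders,
# the real ones come from the external ceph deployment.
ceph_conf = configparser.ConfigParser()
ceph_conf["global"] = {
    "fsid": "00000000-0000-0000-0000-000000000000",
    "mon_host": "192.0.2.10,192.0.2.11,192.0.2.12",
}
with open("ceph.conf", "w") as handle:
    ceph_conf.write(handle)
```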
2025-05-19 14:58:01.834618 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-19 14:58:01.834628 | orchestrator | Monday 19 May 2025 14:53:59 +0000 (0:00:01.420) 0:04:37.617 ************ 2025-05-19 14:58:01.834637 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-19 14:58:01.834647 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-19 14:58:01.834656 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-19 14:58:01.834665 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-19 14:58:01.834674 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-19 14:58:01.834684 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-19 14:58:01.834693 | orchestrator | 2025-05-19 14:58:01.834702 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-19 14:58:01.834710 | orchestrator | Monday 19 May 2025 14:54:02 +0000 (0:00:02.976) 0:04:40.594 ************ 2025-05-19 14:58:01.834718 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 14:58:01.834726 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 14:58:01.834734 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 14:58:01.834742 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 14:58:01.834749 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:58:01.834757 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 14:58:01.834765 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:58:01.834772 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 14:58:01.834780 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:58:01.834793 | orchestrator |
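The two secret tasks register the extracted ceph keys with libvirt on each compute node: first a secret definition per UUID (the client.nova and client.cinder secrets above), then the key value itself. Done by hand, the equivalent steps would look roughly like the sketch below; the UUID and key are placeholders, and the role drives this through templates and handlers inside the nova_libvirt container rather than direct virsh calls like these:

```python
import subprocess

# Placeholders; the log above shows the real UUIDs used for the
# client.nova and client.cinder secrets.
SECRET_UUID = "00000000-0000-0000-0000-000000000000"
CEPH_KEY = "AQDummyKeyOnlyForIllustration=="

secret_xml = f"""<secret ephemeral='no' private='no'>
  <uuid>{SECRET_UUID}</uuid>
  <usage type='ceph'>
    <name>client.nova secret</name>
  </usage>
</secret>"""

with open("secret.xml", "w") as handle:
    handle.write(secret_xml)

# Define the secret, then attach the ceph key to it.
subprocess.run(["virsh", "secret-define", "--file", "secret.xml"], check=True)
subprocess.run(
    ["virsh", "secret-set-value", "--secret", SECRET_UUID,
     "--base64", CEPH_KEY],
    check=True,
)
```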
2025-05-19 14:58:01.834801 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-19 14:58:01.834809 | orchestrator | Monday 19 May 2025 14:54:05 +0000 (0:00:03.417) 0:04:44.011 ************ 2025-05-19 14:58:01.834816 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.834824 | orchestrator | 2025-05-19 14:58:01.834832 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-19 14:58:01.834840 | orchestrator | Monday 19 May 2025 14:54:06 +0000 (0:00:00.183) 0:04:44.195 ************ 2025-05-19 14:58:01.834847 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.834855 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.834863 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.834870 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.834878 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.834886 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.834893 | orchestrator | 2025-05-19 14:58:01.834901 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-19 14:58:01.834909 | orchestrator | Monday 19 May 2025 14:54:06 +0000 (0:00:00.761) 0:04:44.956 ************ 2025-05-19 14:58:01.834917 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 14:58:01.834924 | orchestrator | 2025-05-19 14:58:01.834932 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-19 14:58:01.834940 | orchestrator | Monday 19 May 2025 14:54:07 +0000 (0:00:00.678) 0:04:45.635 ************ 2025-05-19 14:58:01.834948 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.834955 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.834963 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.834971 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.834978 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.834986 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.834993 | orchestrator | 2025-05-19 14:58:01.835018 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-19 14:58:01.835026 | orchestrator | Monday 19 May 2025 14:54:08 +0000 (0:00:00.575) 0:04:46.211 ************ 2025-05-19 14:58:01.835042 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835065 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835121 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835171 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835205 | orchestrator |
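Each config.json copied here is read by the kolla_start entrypoint when the container boots: it copies the listed config files into place with the given owner and permissions and then executes the service command. A hand-written example of the general shape; the command, paths and owner below are illustrative, the real files are templated by the role:

```python
import json

# Shape of a kolla config.json as consumed by the containers'
# kolla_start entrypoint; command and file list are illustrative.
config = {
    "command": "nova-compute",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/nova.conf",
            "dest": "/etc/nova/nova.conf",
            "owner": "nova",
            "perm": "0600",
        }
    ],
    "permissions": [
        {"path": "/var/log/kolla/nova", "owner": "nova:nova", "recurse": True}
    ],
}
print(json.dumps(config, indent=4))
```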
2025-05-19 14:58:01.835213 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-19 14:58:01.835221 | orchestrator | Monday 19 May 2025 14:54:12 +0000 (0:00:04.063) 0:04:50.274 ************ 2025-05-19 14:58:01.835229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.835238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.835250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.835258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.835272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.835285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.835293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.835644 | orchestrator |
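The nova.conf written above is assembled from several layered sources (global defaults, service overrides, node overrides) by kolla-ansible's merge_configs plugin, with later fragments overriding earlier ones. A rough configparser equivalent of that layering; the fragments themselves are illustrative, not this deployment's settings:

```python
import configparser

# Rough equivalent of layering INI fragments the way merge_configs
# does: read sources in order, later sections/keys win.
base = "[DEFAULT]\ndebug = False\ncompute_driver = libvirt.LibvirtDriver\n"
override = "[DEFAULT]\ndebug = True\n"

merged = configparser.ConfigParser()
for fragment in (base, override):  # order matters
    merged.read_string(fragment)

assert merged["DEFAULT"]["debug"] == "True"  # override wins
with open("nova.conf", "w") as handle:
    merged.write(handle)
```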
orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-19 14:58:01.835820 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.835828 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-19 14:58:01.835836 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.835844 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-19 14:58:01.835852 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-19 14:58:01.835860 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-19 14:58:01.835868 | orchestrator | 2025-05-19 14:58:01.835875 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-19 14:58:01.835883 | orchestrator | Monday 19 May 2025 14:54:24 +0000 (0:00:03.811) 0:05:02.617 ************ 2025-05-19 14:58:01.835891 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.835899 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.835907 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.835915 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.835922 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.835930 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.835938 | orchestrator | 2025-05-19 14:58:01.835946 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-19 14:58:01.835954 | orchestrator | Monday 19 May 2025 14:54:25 +0000 (0:00:00.792) 0:05:03.410 ************ 2025-05-19 14:58:01.835962 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 14:58:01.835970 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 14:58:01.835978 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 14:58:01.835986 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 14:58:01.835994 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 14:58:01.836049 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 14:58:01.836059 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 14:58:01.836066 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 14:58:01.836074 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 14:58:01.836088 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-19 14:58:01.836096 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.836103 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-19 
2025-05-19 14:58:01.835875 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-19 14:58:01.835883 | orchestrator | Monday 19 May 2025 14:54:24 +0000 (0:00:03.811) 0:05:02.617 ************ 2025-05-19 14:58:01.835891 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.835899 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.835907 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.835915 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.835922 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.835930 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.835938 | orchestrator | 2025-05-19 14:58:01.835946 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-19 14:58:01.835954 | orchestrator | Monday 19 May 2025 14:54:25 +0000 (0:00:00.792) 0:05:03.410 ************ 2025-05-19 14:58:01.835962 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 14:58:01.835970 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 14:58:01.835978 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 14:58:01.835986 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 14:58:01.835994 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 14:58:01.836049 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 14:58:01.836059 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 14:58:01.836066 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 14:58:01.836074 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 14:58:01.836088 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-19 14:58:01.836096 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.836103 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-19 14:58:01.836111 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.836123 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-19 14:58:01.836131 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.836139 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-19 14:58:01.836147 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-19 14:58:01.836155 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-19 14:58:01.836163 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-19 14:58:01.836171 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-19 14:58:01.836178 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-19 14:58:01.836186 | orchestrator |
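The SASL task gives nova-compute credentials for talking to the nova_libvirt daemon instead of relying on an unauthenticated socket: auth.conf on the client side, sasl.conf for the daemon. A sketch of a client-side auth.conf in the format libvirt documents; the user, password and mapping label are placeholders, and the role's actual templates may differ:

```python
# Client-side libvirt auth.conf in the documented format: a
# credentials block plus an auth-<service>-<host> mapping that
# points at it.  All values here are placeholders.
auth_conf = """[credentials-kolla]
authname=nova
password=not-a-real-password

[auth-libvirt-default]
credentials=kolla
"""
with open("auth.conf", "w") as handle:
    handle.write(auth_conf)
```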
14:58:01.836369 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-19 14:58:01.836377 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.836385 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-19 14:58:01.836396 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.836404 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 14:58:01.836411 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 14:58:01.836419 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 14:58:01.836427 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-19 14:58:01.836434 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-19 14:58:01.836442 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-19 14:58:01.836450 | orchestrator | 2025-05-19 14:58:01.836457 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-19 14:58:01.836465 | orchestrator | Monday 19 May 2025 14:54:39 +0000 (0:00:07.877) 0:05:17.216 ************ 2025-05-19 14:58:01.836472 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.836480 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.836488 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.836495 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.836503 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.836511 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.836518 | orchestrator | 2025-05-19 14:58:01.836526 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-19 14:58:01.836534 | orchestrator | Monday 19 May 2025 14:54:39 +0000 (0:00:00.506) 0:05:17.723 ************ 2025-05-19 14:58:01.836542 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.836550 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.836557 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.836564 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.836570 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.836577 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.836584 | orchestrator | 2025-05-19 14:58:01.836596 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-19 14:58:01.836602 | orchestrator | Monday 19 May 2025 14:54:40 +0000 (0:00:00.615) 0:05:18.338 ************ 2025-05-19 14:58:01.836609 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.836616 | orchestrator | changed: [testbed-node-4] 2025-05-19 14:58:01.836622 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.836629 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.836635 | orchestrator | changed: [testbed-node-3] 2025-05-19 14:58:01.836642 | orchestrator | changed: [testbed-node-5] 2025-05-19 14:58:01.836648 | orchestrator | 2025-05-19 14:58:01.836655 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-19 14:58:01.836662 | orchestrator | Monday 19 May 2025 14:54:42 +0000 (0:00:02.335) 0:05:20.674 
************ 2025-05-19 14:58:01.836674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.836681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.836693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.836700 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.836707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.836717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.836724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.836731 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.836743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 14:58:01.836756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.836762 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.836770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 14:58:01.836777 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 14:58:01.836787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.836794 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.836801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 14:58:01.836812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.836824 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.836831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 14:58:01.836838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 14:58:01.836845 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.836851 | orchestrator | 2025-05-19 14:58:01.836858 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-19 14:58:01.836865 | orchestrator | Monday 19 May 2025 14:54:44 +0000 (0:00:02.428) 0:05:23.103 ************ 2025-05-19 14:58:01.836871 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-19 14:58:01.836878 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-19 14:58:01.836885 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.836892 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-19 14:58:01.836898 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-19 14:58:01.836905 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.836912 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-19 14:58:01.836918 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-19 14:58:01.836925 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.836931 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-19 14:58:01.836938 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-19 14:58:01.836944 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.836951 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-19 14:58:01.836957 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-19 14:58:01.836964 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.836970 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-19 14:58:01.836977 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-19 14:58:01.836983 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:01.836990 | orchestrator | 2025-05-19 14:58:01.836996 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-19 14:58:01.837020 | orchestrator | Monday 19 May 2025 14:54:45 +0000 (0:00:00.557) 0:05:23.660 ************ 2025-05-19 14:58:01.837028 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837051 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837058 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837098 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 
14:58:01.837127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837141 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 14:58:01.837166 | orchestrator | 2025-05-19 14:58:01.837173 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-19 14:58:01.837180 | orchestrator | Monday 19 May 2025 14:54:48 +0000 (0:00:03.048) 0:05:26.709 ************ 2025-05-19 14:58:01.837186 | orchestrator | skipping: [testbed-node-3] 2025-05-19 14:58:01.837193 | orchestrator | skipping: [testbed-node-4] 2025-05-19 14:58:01.837199 | orchestrator | skipping: [testbed-node-5] 2025-05-19 14:58:01.837206 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:01.837212 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:01.837219 | 
orchestrator | skipping: [testbed-node-2]
2025-05-19 14:58:01.837225 | orchestrator |
2025-05-19 14:58:01.837232 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-19 14:58:01.837238 | orchestrator | Monday 19 May 2025 14:54:49 +0000 (0:00:00.479) 0:05:27.189 ************
2025-05-19 14:58:01.837245 | orchestrator |
2025-05-19 14:58:01.837251 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-19 14:58:01.837258 | orchestrator | Monday 19 May 2025 14:54:49 +0000 (0:00:00.219) 0:05:27.408 ************
2025-05-19 14:58:01.837264 | orchestrator |
2025-05-19 14:58:01.837271 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-19 14:58:01.837277 | orchestrator | Monday 19 May 2025 14:54:49 +0000 (0:00:00.117) 0:05:27.525 ************
2025-05-19 14:58:01.837284 | orchestrator |
2025-05-19 14:58:01.837291 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-19 14:58:01.837297 | orchestrator | Monday 19 May 2025 14:54:49 +0000 (0:00:00.117) 0:05:27.643 ************
2025-05-19 14:58:01.837304 | orchestrator |
2025-05-19 14:58:01.837310 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-19 14:58:01.837321 | orchestrator | Monday 19 May 2025 14:54:49 +0000 (0:00:00.115) 0:05:27.759 ************
2025-05-19 14:58:01.837327 | orchestrator |
2025-05-19 14:58:01.837334 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-19 14:58:01.837340 | orchestrator | Monday 19 May 2025 14:54:49 +0000 (0:00:00.113) 0:05:27.873 ************
2025-05-19 14:58:01.837347 | orchestrator |
2025-05-19 14:58:01.837353 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-05-19 14:58:01.837360 | orchestrator | Monday 19 May 2025 14:54:49 +0000 (0:00:00.117) 0:05:27.990 ************
2025-05-19 14:58:01.837366 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:58:01.837373 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:58:01.837380 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:58:01.837386 | orchestrator |
2025-05-19 14:58:01.837393 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-05-19 14:58:01.837399 | orchestrator | Monday 19 May 2025 14:55:01 +0000 (0:00:11.766) 0:05:39.757 ************
2025-05-19 14:58:01.837406 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:58:01.837412 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:58:01.837419 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:58:01.837425 | orchestrator |
2025-05-19 14:58:01.837435 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-05-19 14:58:01.837442 | orchestrator | Monday 19 May 2025 14:55:18 +0000 (0:00:17.384) 0:05:57.141 ************
2025-05-19 14:58:01.837449 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:58:01.837455 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:58:01.837462 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:58:01.837468 | orchestrator |
2025-05-19 14:58:01.837475 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-05-19 14:58:01.837481 | orchestrator | Monday 19 May 2025 14:55:44 +0000 (0:00:25.971) 0:06:23.113 ************
2025-05-19 14:58:01.837488 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:58:01.837494 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:58:01.837501 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:58:01.837508 | orchestrator |
2025-05-19 14:58:01.837514 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-05-19 14:58:01.837521 | orchestrator | Monday 19 May 2025 14:56:29 +0000 (0:00:44.770) 0:07:07.883 ************
2025-05-19 14:58:01.837528 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:58:01.837534 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:58:01.837540 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:58:01.837547 | orchestrator |
2025-05-19 14:58:01.837553 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-05-19 14:58:01.837560 | orchestrator | Monday 19 May 2025 14:56:30 +0000 (0:00:00.989) 0:07:08.873 ************
2025-05-19 14:58:01.837566 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:58:01.837573 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:58:01.837580 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:58:01.837586 | orchestrator |
2025-05-19 14:58:01.837593 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-05-19 14:58:01.837602 | orchestrator | Monday 19 May 2025 14:56:31 +0000 (0:00:00.849) 0:07:09.722 ************
2025-05-19 14:58:01.837609 | orchestrator | changed: [testbed-node-5]
2025-05-19 14:58:01.837616 | orchestrator | changed: [testbed-node-4]
2025-05-19 14:58:01.837623 | orchestrator | changed: [testbed-node-3]
2025-05-19 14:58:01.837629 | orchestrator |
2025-05-19 14:58:01.837636 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-05-19 14:58:01.837643 | orchestrator | Monday 19 May 2025 14:56:56 +0000 (0:00:25.101) 0:07:34.824 ************
2025-05-19 14:58:01.837649 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:58:01.837656 | orchestrator |
2025-05-19 14:58:01.837662 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-05-19 14:58:01.837669 | orchestrator | Monday 19 May 2025 14:56:56 +0000 (0:00:00.124) 0:07:34.949 ************
2025-05-19 14:58:01.837676 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:58:01.837687 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:58:01.837693 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:58:01.837700 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:58:01.837707 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:58:01.837714 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
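The FAILED - RETRYING line above is Ansible's normal retry output rather than an error: the task on testbed-node-4 (delegated to testbed-node-0) polls until every expected nova-compute service has registered in the cell database, and it succeeds on a later attempt (the ok result just below). A minimal sketch of the same retry-until pattern, assuming admin OS_* credentials on the controller and an inventory group named compute; the actual kolla-ansible task differs in detail:

    - name: Wait for nova-compute services to register themselves (sketch)
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Poll until every expected compute host appears in the service list
          # 'compute' is an assumed inventory group name for this sketch.
          ansible.builtin.command: openstack compute service list --service nova-compute -f json
          register: services
          changed_when: false
          retries: 20
          delay: 10
          until: services.rc == 0 and (services.stdout | from_json | length) >= (groups['compute'] | length)

Each unsuccessful poll prints exactly one FAILED - RETRYING line with the remaining retry count, which is what appears in the log here.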
2025-05-19 14:58:01.837720 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-05-19 14:58:01.837727 | orchestrator |
2025-05-19 14:58:01.837734 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-05-19 14:58:01.837740 | orchestrator | Monday 19 May 2025 14:57:18 +0000 (0:00:22.044) 0:07:56.994 ************
2025-05-19 14:58:01.837747 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:58:01.837753 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:58:01.837760 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:58:01.837766 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:58:01.837772 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:58:01.837779 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:58:01.837785 | orchestrator |
2025-05-19 14:58:01.837792 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-05-19 14:58:01.837798 | orchestrator | Monday 19 May 2025 14:57:25 +0000 (0:00:07.111) 0:08:04.105 ************
2025-05-19 14:58:01.837805 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:58:01.837812 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:58:01.837819 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:58:01.837825 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:58:01.837831 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:58:01.837838 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2025-05-19 14:58:01.837845 | orchestrator |
2025-05-19 14:58:01.837851 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-19 14:58:01.837858 | orchestrator | Monday 19 May 2025 14:57:29 +0000 (0:00:03.242) 0:08:07.348 ************
2025-05-19 14:58:01.837864 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-05-19 14:58:01.837871 | orchestrator |
2025-05-19 14:58:01.837878 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-19 14:58:01.837884 | orchestrator | Monday 19 May 2025 14:57:40 +0000 (0:00:11.361) 0:08:18.710 ************
2025-05-19 14:58:01.837890 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-05-19 14:58:01.837897 | orchestrator |
2025-05-19 14:58:01.837904 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-05-19 14:58:01.837910 | orchestrator | Monday 19 May 2025 14:57:41 +0000 (0:00:01.176) 0:08:19.887 ************
2025-05-19 14:58:01.837917 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:58:01.837923 | orchestrator |
2025-05-19 14:58:01.837930 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-05-19 14:58:01.837936 | orchestrator | Monday 19 May 2025 14:57:42 +0000 (0:00:01.135) 0:08:21.023 ************
2025-05-19 14:58:01.837943 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-05-19 14:58:01.837949 | orchestrator |
2025-05-19 14:58:01.837956 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-05-19 14:58:01.837962 | orchestrator | Monday 19 May 2025 14:57:52 +0000 (0:00:09.933) 0:08:30.957 ************
2025-05-19 14:58:01.837969 | orchestrator | ok: [testbed-node-3]
2025-05-19 14:58:01.837975 | orchestrator | ok: [testbed-node-4]
2025-05-19 14:58:01.837982 | orchestrator | ok: [testbed-node-5]
2025-05-19 14:58:01.837988 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:58:01.838012 | orchestrator | ok: [testbed-node-1]
2025-05-19 14:58:01.838041 | orchestrator | ok: [testbed-node-2]
2025-05-19 14:58:01.838048 | orchestrator |
2025-05-19 14:58:01.838054 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-05-19 14:58:01.838061 | orchestrator |
2025-05-19 14:58:01.838068 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-05-19 14:58:01.838079 | orchestrator | Monday 19 May 2025 14:57:54 +0000 (0:00:01.560) 0:08:32.517 ************
2025-05-19 14:58:01.838086 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:58:01.838092 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:58:01.838099 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:58:01.838106 | orchestrator |
2025-05-19 14:58:01.838113 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-05-19 14:58:01.838119 | orchestrator |
2025-05-19 14:58:01.838126 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-05-19 14:58:01.838132 | orchestrator | Monday 19 May 2025 14:57:55 +0000 (0:00:01.025) 0:08:33.543 ************
2025-05-19 14:58:01.838139 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:58:01.838145 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:58:01.838152 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:58:01.838158 | orchestrator |
2025-05-19 14:58:01.838165 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-05-19 14:58:01.838172 | orchestrator |
2025-05-19 14:58:01.838178 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-05-19 14:58:01.838185 | orchestrator | Monday 19 May 2025 14:57:55 +0000 (0:00:00.492) 0:08:34.036 ************
2025-05-19 14:58:01.838191 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-05-19 14:58:01.838202 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-05-19 14:58:01.838209 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-05-19 14:58:01.838216 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-05-19 14:58:01.838222 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-05-19 14:58:01.838229 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-05-19 14:58:01.838236 | orchestrator | skipping: [testbed-node-3]
2025-05-19 14:58:01.838242 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-05-19 14:58:01.838249 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-05-19 14:58:01.838255 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-05-19 14:58:01.838262 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-05-19 14:58:01.838269 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-05-19 14:58:01.838275 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-05-19 14:58:01.838282 | orchestrator | skipping: [testbed-node-4]
2025-05-19 14:58:01.838289 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-05-19 14:58:01.838295 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-05-19 14:58:01.838302 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-05-19 14:58:01.838309 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-05-19 14:58:01.838315 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-05-19 14:58:01.838322 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-05-19 14:58:01.838329 | orchestrator | skipping: [testbed-node-5]
2025-05-19 14:58:01.838335 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-05-19 14:58:01.838342 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-05-19 14:58:01.838349 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-05-19 14:58:01.838355 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-05-19 14:58:01.838362 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-05-19 14:58:01.838368 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-05-19 14:58:01.838375 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:58:01.838381 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-05-19 14:58:01.838388 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-05-19 14:58:01.838399 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-05-19 14:58:01.838405 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-05-19 14:58:01.838412 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-05-19 14:58:01.838418 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-05-19 14:58:01.838425 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:58:01.838432 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-05-19 14:58:01.838438 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-05-19 14:58:01.838445 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-05-19 14:58:01.838451 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-05-19 14:58:01.838458 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-05-19 14:58:01.838464 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-05-19 14:58:01.838471 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:58:01.838478 | orchestrator |
2025-05-19 14:58:01.838484 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-05-19 14:58:01.838491 | orchestrator |
2025-05-19 14:58:01.838497 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-05-19 14:58:01.838504 | orchestrator | Monday 19 May 2025 14:57:57 +0000 (0:00:01.226) 0:08:35.262 ************
2025-05-19 14:58:01.838511 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-05-19 14:58:01.838518 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-05-19 14:58:01.838524 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:58:01.838531 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-05-19 14:58:01.838541 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-05-19 14:58:01.838547 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:58:01.838554 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-05-19 14:58:01.838561 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-05-19 14:58:01.838567 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:58:01.838574 | orchestrator |
2025-05-19 14:58:01.838581 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-05-19 14:58:01.838587 | orchestrator |
2025-05-19 14:58:01.838594 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-05-19 14:58:01.838601 | orchestrator | Monday 19 May 2025 14:57:57 +0000 (0:00:00.707) 0:08:35.970 ************
2025-05-19 14:58:01.838607 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:58:01.838614 | orchestrator |
2025-05-19 14:58:01.838620 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-05-19 14:58:01.838627 | orchestrator |
2025-05-19 14:58:01.838633 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-05-19 14:58:01.838640 | orchestrator | Monday 19 May 2025 14:57:58 +0000 (0:00:00.652) 0:08:36.623 ************
2025-05-19 14:58:01.838646 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:58:01.838653 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:58:01.838660 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:58:01.838666 | orchestrator |
2025-05-19 14:58:01.838673 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:58:01.838680 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 14:58:01.838690 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-05-19 14:58:01.838697 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-19 14:58:01.838704 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-19 14:58:01.838715 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-05-19 14:58:01.838722 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-05-19 14:58:01.838728 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-19 14:58:01.838735 | orchestrator |
2025-05-19 14:58:01.838741 | orchestrator |
2025-05-19 14:58:01.838748 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:58:01.838755 | orchestrator | Monday 19 May 2025 14:57:58 +0000 (0:00:00.405) 0:08:37.028 ************
2025-05-19 14:58:01.838762 | orchestrator | ===============================================================================
2025-05-19 14:58:01.838768 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.77s
2025-05-19 14:58:01.838775 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 26.76s
2025-05-19 14:58:01.838782 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.97s
2025-05-19 14:58:01.838788 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.10s
2025-05-19 14:58:01.838795 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.04s
2025-05-19 14:58:01.838801 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.41s
2025-05-19 14:58:01.838808 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.58s
2025-05-19 14:58:01.838815 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.38s
2025-05-19 14:58:01.838821 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.92s
2025-05-19 14:58:01.838828 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.77s
2025-05-19 14:58:01.838834 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.56s
2025-05-19 14:58:01.838841 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.43s
2025-05-19 14:58:01.838847 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.36s
2025-05-19 14:58:01.838854 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.17s
2025-05-19 14:58:01.838861 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 11.12s
2025-05-19 14:58:01.838867 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.93s
2025-05-19 14:58:01.838874 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 9.39s
2025-05-19 14:58:01.838880 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.27s
2025-05-19 14:58:01.838887 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.88s
2025-05-19 14:58:01.838894 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.80s
2025-05-19 14:58:01.838900 | orchestrator | 2025-05-19 14:58:01 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:58:01.838910 | orchestrator | 2025-05-19 14:58:01 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:58:01.838917 | orchestrator | 2025-05-19 14:58:01 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:58:04.879861 | orchestrator | 2025-05-19 14:58:04 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:58:04.881702 | orchestrator | 2025-05-19 14:58:04 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:58:04.881741 | orchestrator | 2025-05-19 14:58:04 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:58:07.927908 | orchestrator | 2025-05-19 14:58:07 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:58:07.930956 | orchestrator | 2025-05-19 14:58:07 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:58:07.931124 | orchestrator | 2025-05-19 14:58:07 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:58:10.976598 | orchestrator | 2025-05-19 14:58:10 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state STARTED
2025-05-19 14:58:10.977715 | orchestrator | 2025-05-19 14:58:10 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:58:10.977809 | orchestrator | 2025-05-19 14:58:10 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:58:14.020271 | orchestrator |
2025-05-19 14:58:14.020374 | orchestrator |
2025-05-19 14:58:14.020387 | orchestrator | PLAY [Group hosts based on configuration] **************************************
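With the nova plays recapped above, the deployment moves on to the next service. The play that has just opened is kolla-ansible's usual entry point: its first tasks, shown below, sort hosts into dynamic groups (for example enable_grafana_True) with Ansible's group_by module, and the subsequent "Apply role grafana" play targets that group. A minimal sketch of the pattern; the default value is an assumption, not the exact kolla-ansible code:

    - name: Group hosts based on enabled services (sketch)
      hosts: all
      gather_facts: false
      tasks:
        - name: Put each host into a group derived from its feature flag
          # enable_grafana mirrors the variable visible in the log output.
          ansible.builtin.group_by:
            key: enable_grafana_{{ enable_grafana | default('False') }}

    - name: Apply role grafana
      hosts: enable_grafana_True
      gather_facts: false
      roles:
        - grafana

Hosts where the flag is unset land in enable_grafana_False and are simply never matched by the second play, which is why disabled services cost nothing beyond the grouping task.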
2025-05-19 14:58:14.020400 | orchestrator | 2025-05-19 14:58:14.020413 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 14:58:14.020432 | orchestrator | Monday 19 May 2025 14:55:55 +0000 (0:00:00.467) 0:00:00.467 ************ 2025-05-19 14:58:14.020445 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:58:14.020460 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:58:14.020473 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:58:14.020488 | orchestrator | 2025-05-19 14:58:14.020501 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 14:58:14.020514 | orchestrator | Monday 19 May 2025 14:55:55 +0000 (0:00:00.453) 0:00:00.921 ************ 2025-05-19 14:58:14.020526 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-05-19 14:58:14.020539 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-19 14:58:14.020550 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-19 14:58:14.020564 | orchestrator | 2025-05-19 14:58:14.020575 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-19 14:58:14.020586 | orchestrator | 2025-05-19 14:58:14.020600 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-19 14:58:14.020612 | orchestrator | Monday 19 May 2025 14:55:56 +0000 (0:00:00.743) 0:00:01.664 ************ 2025-05-19 14:58:14.020626 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:58:14.020641 | orchestrator | 2025-05-19 14:58:14.020654 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-19 14:58:14.020669 | orchestrator | Monday 19 May 2025 14:55:57 +0000 (0:00:00.601) 0:00:02.266 ************ 2025-05-19 14:58:14.020686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.020706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.020769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.020780 | orchestrator | 2025-05-19 14:58:14.020790 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-19 14:58:14.020800 | orchestrator | Monday 19 May 2025 14:55:57 +0000 (0:00:00.810) 0:00:03.077 ************ 2025-05-19 14:58:14.020934 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-19 14:58:14.020947 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-19 14:58:14.020956 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:58:14.020965 | orchestrator | 2025-05-19 14:58:14.020975 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-19 14:58:14.020984 | orchestrator | Monday 19 May 2025 14:55:58 +0000 (0:00:01.081) 0:00:04.158 ************ 2025-05-19 14:58:14.020993 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 14:58:14.021026 | orchestrator | 2025-05-19 14:58:14.021268 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-19 14:58:14.021282 | orchestrator | Monday 19 May 2025 14:55:59 +0000 (0:00:00.839) 0:00:04.997 ************ 2025-05-19 14:58:14.021308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.021317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.021326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.021335 | orchestrator | 2025-05-19 14:58:14.021343 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-05-19 14:58:14.021361 | orchestrator | Monday 19 May 2025 14:56:01 +0000 (0:00:01.497) 0:00:06.495 ************ 2025-05-19 14:58:14.021370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 14:58:14.021379 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:14.021396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 14:58:14.021405 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:14.021421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 14:58:14.021430 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:14.021438 | orchestrator | 2025-05-19 14:58:14.021446 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-19 14:58:14.021454 | orchestrator | Monday 19 May 2025 14:56:01 +0000 (0:00:00.346) 0:00:06.842 ************ 2025-05-19 14:58:14.021463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 14:58:14.021471 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:14.021480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 14:58:14.021494 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:14.021502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 14:58:14.021511 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:14.021519 | orchestrator | 2025-05-19 14:58:14.021527 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-19 14:58:14.021535 | orchestrator | Monday 19 May 2025 14:56:02 +0000 (0:00:00.816) 0:00:07.659 ************ 2025-05-19 14:58:14.021547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.021556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.021571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.021580 | orchestrator | 2025-05-19 14:58:14.021589 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-19 14:58:14.021597 | orchestrator | Monday 19 May 2025 14:56:03 +0000 (0:00:01.118) 0:00:08.777 ************ 2025-05-19 14:58:14.021606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.021620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.021629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 14:58:14.021637 | orchestrator | 2025-05-19 14:58:14.021645 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-19 14:58:14.021705 | orchestrator | Monday 19 May 2025 14:56:04 
+0000 (0:00:01.204) 0:00:09.982 ************ 2025-05-19 14:58:14.021715 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:14.021723 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:14.021735 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:14.021743 | orchestrator | 2025-05-19 14:58:14.021751 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-19 14:58:14.021759 | orchestrator | Monday 19 May 2025 14:56:05 +0000 (0:00:00.490) 0:00:10.472 ************ 2025-05-19 14:58:14.021806 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-19 14:58:14.021817 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-19 14:58:14.022208 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-19 14:58:14.022223 | orchestrator | 2025-05-19 14:58:14.022231 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-19 14:58:14.022239 | orchestrator | Monday 19 May 2025 14:56:06 +0000 (0:00:01.353) 0:00:11.826 ************ 2025-05-19 14:58:14.022247 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-19 14:58:14.022255 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-19 14:58:14.022263 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-19 14:58:14.022271 | orchestrator | 2025-05-19 14:58:14.022279 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-19 14:58:14.022287 | orchestrator | Monday 19 May 2025 14:56:07 +0000 (0:00:01.207) 0:00:13.033 ************ 2025-05-19 14:58:14.022327 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 14:58:14.022336 | orchestrator | 2025-05-19 14:58:14.022344 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-19 14:58:14.022352 | orchestrator | Monday 19 May 2025 14:56:08 +0000 (0:00:00.725) 0:00:13.759 ************ 2025-05-19 14:58:14.022360 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-19 14:58:14.022367 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-19 14:58:14.022375 | orchestrator | ok: [testbed-node-0] 2025-05-19 14:58:14.022393 | orchestrator | ok: [testbed-node-1] 2025-05-19 14:58:14.022401 | orchestrator | ok: [testbed-node-2] 2025-05-19 14:58:14.022409 | orchestrator | 2025-05-19 14:58:14.022447 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-19 14:58:14.022455 | orchestrator | Monday 19 May 2025 14:56:09 +0000 (0:00:00.664) 0:00:14.424 ************ 2025-05-19 14:58:14.022463 | orchestrator | skipping: [testbed-node-0] 2025-05-19 14:58:14.022471 | orchestrator | skipping: [testbed-node-1] 2025-05-19 14:58:14.022479 | orchestrator | skipping: [testbed-node-2] 2025-05-19 14:58:14.022487 | orchestrator | 2025-05-19 14:58:14.022494 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-19 14:58:14.022502 | orchestrator | Monday 19 May 2025 14:56:09 +0000 (0:00:00.460) 0:00:14.884 ************ 
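The two provisioning steps above rely on Grafana's file-based provisioning: 'prometheus.yaml.j2' is rendered into a datasource definition and 'provisioning.yaml' registers a file-backed dashboard provider, which is what makes the dashboard JSON files distributed by the "Copying over custom dashboards" task below visible to Grafana without any API calls. A minimal sketch of the two file shapes, using Grafana's standard provisioning schema; the URL and path values here are illustrative assumptions, not the actual contents of the kolla-ansible or testbed templates:

    # Shape of a rendered datasource file (e.g. the output of prometheus.yaml.j2)
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://localhost:9091   # assumed; the real template points at the deployment's Prometheus endpoint
        isDefault: true

    # Shape of a dashboard provider file (e.g. provisioning.yaml)
    apiVersion: 1
    providers:
      - name: default
        orgId: 1
        type: file
        options:
          path: /var/lib/grafana/dashboards   # assumed; must match where the role places the dashboard JSON files in the container

Grafana loads both kinds of files at startup, so the role only needs to copy files and restart the container rather than talk to the Grafana API.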
2025-05-19 14:58:14.022512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1339931, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8862312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1339931, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8862312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1339931, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8862312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1339926, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8632307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1339926, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8632307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1339926, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8632307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1339923, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8572307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1339923, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8572307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1339923, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8572307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1339929, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8662307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1339929, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8662307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1339929, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8662307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1339911, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8492305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1339911, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8492305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1339911, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8492305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1339924, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8582306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1339924, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8582306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1339924, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8582306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1339928, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8652308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1339928, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8652308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1339928, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8652308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-05-19 14:58:14.022801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1339908, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8482306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1339908, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8482306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1339908, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8482306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14 | INFO  | Task 66951634-f866-4628-8723-583d7373f2c4 is in state SUCCESS 2025-05-19 14:58:14.022860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1339257, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.5692263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1339257, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.5692263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022892 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1339257, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.5692263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1339914, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8492305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1339914, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8492305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1339914, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8492305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1339260, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.5722263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1339260, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.5722263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1339260, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.5722263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1339927, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8642309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.022997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1339927, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8642309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1339927, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8642309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1339921, 'dev': 174, 'nlink': 1, 'atime': 
1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8532307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1339921, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8532307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1339921, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8532307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1339930, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.867231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1339930, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.867231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1339930, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.867231, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1339905, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8472304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1339905, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8472304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1339905, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8472304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1339925, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8632307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1339925, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8632307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023260 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1339925, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8632307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1339258, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.5712264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1339258, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.5712264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1339258, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.5712264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1339261, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8452306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1339261, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8452306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1339261, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8452306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1339922, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8542306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1339922, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8542306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1339922, 'dev': 174, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747663401.8542306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 14:58:14.023378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
2025-05-19 14:58:14 | orchestrator | [loop output condensed: in the raw job output each result is one "changed: [testbed-node-N] => (item={'key': '<dashboard>', 'value': {<stat dict>}})" record, repeated for testbed-node-0, testbed-node-1 and testbed-node-2. Every item has mode 0644, owner root:root (uid 0, gid 0) and lives under /operations/grafana/dashboards/. Dashboards changed on all three nodes:]
2025-05-19 14:58:14 | orchestrator |   infrastructure/node_exporter_full.json (682774 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/libvirt.json (29672 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/alertmanager-overview.json (9645 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/prometheus_alertmanager.json (115472 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/blackbox.json (31128 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/prometheus-remote-write.json (22317 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/rabbitmq.json (222049 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/node_exporter_side_by_side.json (70691 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/opensearch.json (65458 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/cadvisor.json (53882 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/memcached.json (24243 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/redfish.json (38087 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/prometheus.json (21898 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/elasticsearch.json (187864 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/database.json (30898 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/fluentd.json (82960 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/haproxy.json (410814 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/node-cluster-rsrc-use.json (16098 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/nodes.json (21109 bytes)
2025-05-19 14:58:14 | orchestrator |   infrastructure/node-rsrc-use.json (15725 bytes)
2025-05-19 14:58:14 | orchestrator |   openstack/openstack.json (57270 bytes)
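[editor's note] Loop-heavy output like the block above is easier to audit with a small script. A minimal sketch, assuming the unabridged job output is saved locally (the filename is a placeholder): it tallies, per node, which dashboard items were reported "changed".

import re
from collections import defaultdict

# Matches the raw loop-result records: changed: [testbed-node-0] => (item={'key': '...'
ITEM_RE = re.compile(
    r"changed: \[(?P<node>testbed-node-\d+)\] => \(item=\{'key': '(?P<key>[^']+)'"
)

def summarize(log_path):
    copied = defaultdict(set)  # node -> set of dashboard keys
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            m = ITEM_RE.search(line)
            if m:
                copied[m.group("node")].add(m.group("key"))
    return copied

if __name__ == "__main__":
    for node, keys in sorted(summarize("job-output.txt").items()):  # path is an assumption
        print(f"{node}: {len(keys)} dashboards changed")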
2025-05-19 14:58:14 | orchestrator |
2025-05-19 14:58:14 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:56:45 +0000 (0:00:36.177) 0:00:51.062 ************
2025-05-19 14:58:14 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] => (item=grafana), identical on all three nodes:
2025-05-19 14:58:14 | orchestrator |   container_name: grafana, group: grafana, enabled: True, image: registry.osism.tech/kolla/grafana:2024.2
2025-05-19 14:58:14 | orchestrator |   volumes: /etc/kolla/grafana/:/var/lib/kolla/config_files/:ro, /etc/localtime:/etc/localtime:ro, /etc/timezone:/etc/timezone:ro, kolla_logs:/var/log/kolla/
2025-05-19 14:58:14 | orchestrator |   haproxy: grafana_server (internal, http, port 3000, listen_port 3000), grafana_server_external (external, http, external_fqdn api.testbed.osism.xyz, port 3000, listen_port 3000)
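[editor's note] The next two tasks provision Grafana's backing database. In plain terms they amount to roughly the sketch below (pymysql; host and credentials are placeholders, and the playbook drives this through Kolla-Ansible's own modules rather than a script like this):

import pymysql

# Placeholder connection details; not values taken from this job.
conn = pymysql.connect(host="api-int.testbed.osism.xyz", port=3306,
                       user="root", password="<root-password>")
with conn.cursor() as cur:
    # Create the schema, then a dedicated user with rights on it.
    cur.execute("CREATE DATABASE IF NOT EXISTS grafana")
    cur.execute("CREATE USER IF NOT EXISTS 'grafana'@'%' IDENTIFIED BY %s",
                ("<grafana-db-password>",))
    cur.execute("GRANT ALL PRIVILEGES ON grafana.* TO 'grafana'@'%'")
conn.commit()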
2025-05-19 14:58:14 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:56:46 +0000 (0:00:00.941) 0:00:52.003 ************
2025-05-19 14:58:14 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:58:14 | orchestrator |
2025-05-19 14:58:14 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:56:49 +0000 (0:00:02.170) 0:00:54.174 ************
2025-05-19 14:58:14 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:58:14 | orchestrator |
2025-05-19 14:58:14 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:56:51 +0000 (0:00:02.429) 0:00:56.604 ************
2025-05-19 14:58:14 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:56:51 +0000 (0:00:00.064) 0:00:56.668 ************
2025-05-19 14:58:14 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:56:51 +0000 (0:00:00.069) 0:00:56.738 ************
2025-05-19 14:58:14 | orchestrator |
2025-05-19 14:58:14 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:56:51 +0000 (0:00:00.072) 0:00:56.811 ************
2025-05-19 14:58:14 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:58:14 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:58:14 | orchestrator | changed: [testbed-node-0]
2025-05-19 14:58:14 | orchestrator |
2025-05-19 14:58:14 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:56:53 +0000 (0:00:02.042) 0:00:58.854 ************
2025-05-19 14:58:14 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:58:14 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:58:14 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-05-19 14:58:14 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-05-19 14:58:14 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-05-19 14:58:14 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:58:14 | orchestrator |
2025-05-19 14:58:14 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:57:31 +0000 (0:00:38.166) 0:01:37.021 ************
2025-05-19 14:58:14 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:58:14 | orchestrator | changed: [testbed-node-2]
2025-05-19 14:58:14 | orchestrator | changed: [testbed-node-1]
2025-05-19 14:58:14 | orchestrator |
2025-05-19 14:58:14 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:58:06 +0000 (0:00:35.132) 0:02:12.153 ************
2025-05-19 14:58:14 | orchestrator | ok: [testbed-node-0]
2025-05-19 14:58:14 | orchestrator |
2025-05-19 14:58:14 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:58:09 +0000 (0:00:02.224) 0:02:14.378 ************
2025-05-19 14:58:14 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:58:14 | orchestrator | skipping: [testbed-node-1]
2025-05-19 14:58:14 | orchestrator | skipping: [testbed-node-2]
2025-05-19 14:58:14 | orchestrator |
2025-05-19 14:58:14 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:58:09 +0000 (0:00:00.284) 0:02:14.662 ************
2025-05-19 14:58:14 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-05-19 14:58:14 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-05-19 14:58:14 | orchestrator |
2025-05-19 14:58:14 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:58:11 +0000 (0:00:02.276) 0:02:16.939 ************
2025-05-19 14:58:14 | orchestrator | skipping: [testbed-node-0]
2025-05-19 14:58:14 | orchestrator |
2025-05-19 14:58:14 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 14:58:14 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2025-05-19 14:58:14 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2025-05-19 14:58:14 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
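[editor's note] The "Waiting for grafana to start on first node" handler above failed three polls (12 -> 10 retries left) before succeeding: a bounded-retry readiness check. A minimal sketch of that pattern, assuming Grafana's stock /api/health endpoint; the URL, retry budget and delay here are illustrative guesses, not values read from the playbook:

import ssl
import time
import urllib.request

def wait_for_grafana(url="https://api-int.testbed.osism.xyz:3000/api/health",
                     retries=12, delay=5.0):
    # Testbed-style self-signed certificates: skip verification in this sketch.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    for remaining in range(retries - 1, -1, -1):
        try:
            with urllib.request.urlopen(url, timeout=5, context=ctx) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            # Connection refused/reset while the container is still starting;
            # mirrors "FAILED - RETRYING (... retries left)" in the log.
            pass
        print(f"FAILED - RETRYING: Waiting for grafana ({remaining} retries left).")
        time.sleep(delay)
    return False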
2025-05-19 14:58:14 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 14:58:14 | orchestrator | Monday 19 May 2025 14:58:12 +0000 (0:00:00.253) 0:02:17.193 ************
2025-05-19 14:58:14 | orchestrator | ===============================================================================
2025-05-19 14:58:14 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.17s
2025-05-19 14:58:14 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.18s
2025-05-19 14:58:14 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 35.13s
2025-05-19 14:58:14 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.43s
2025-05-19 14:58:14 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.28s
2025-05-19 14:58:14 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.22s
2025-05-19 14:58:14 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.17s
2025-05-19 14:58:14 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.04s
2025-05-19 14:58:14 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.50s
2025-05-19 14:58:14 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.35s
2025-05-19 14:58:14 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.21s
2025-05-19 14:58:14 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.20s
2025-05-19 14:58:14 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.12s
2025-05-19 14:58:14 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.08s
2025-05-19 14:58:14 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.94s
2025-05-19 14:58:14 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.84s
2025-05-19 14:58:14 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.82s
2025-05-19 14:58:14 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.81s
2025-05-19 14:58:14 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2025-05-19 14:58:14 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.73s
2025-05-19 14:58:14.024968 | orchestrator | 2025-05-19 14:58:14 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED
2025-05-19 14:58:14.024976 | orchestrator | 2025-05-19 14:58:14 | INFO  | Wait 1 second(s) until the next check
2025-05-19 14:58:17 | orchestrator | [… the same "is in state STARTED" / "Wait 1 second(s) until the next check" pair, repeated roughly every 3 seconds from 14:58:17 through 15:00:43, elided …]
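[editor's note] The elided block is one long client-side status poll; note the message says "Wait 1 second(s)" while the wall-clock spacing is ~3 s, presumably because the status query itself takes time. A minimal sketch of the loop that produces exactly these lines, ending with the SUCCESS transition shown next (get_task_state is a hypothetical stand-in for however the osism client queries its task backend):

import time
import logging

logging.basicConfig(format="%(asctime)s | %(levelname)s  | %(message)s",
                    datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO)

def wait_for_task(task_id, get_task_state, interval=1.0):
    while True:
        state = get_task_state(task_id)
        logging.info("Task %s is in state %s", task_id, state)
        if state in ("SUCCESS", "FAILURE"):
            return state
        logging.info("Wait %d second(s) until the next check", interval)
        time.sleep(interval)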
until the next check 2025-05-19 15:00:34.258859 | orchestrator | 2025-05-19 15:00:34 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 15:00:34.259034 | orchestrator | 2025-05-19 15:00:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 15:00:37.306576 | orchestrator | 2025-05-19 15:00:37 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 15:00:37.306714 | orchestrator | 2025-05-19 15:00:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 15:00:40.363081 | orchestrator | 2025-05-19 15:00:40 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 15:00:40.363242 | orchestrator | 2025-05-19 15:00:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 15:00:43.415415 | orchestrator | 2025-05-19 15:00:43 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state STARTED 2025-05-19 15:00:43.415527 | orchestrator | 2025-05-19 15:00:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 15:00:46.467422 | orchestrator | 2025-05-19 15:00:46 | INFO  | Task 5117b513-56dc-43a5-93d4-7fcd877f61e6 is in state SUCCESS 2025-05-19 15:00:46.469209 | orchestrator | 2025-05-19 15:00:46.469256 | orchestrator | 2025-05-19 15:00:46.469269 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 15:00:46.469281 | orchestrator | 2025-05-19 15:00:46.469293 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 15:00:46.469305 | orchestrator | Monday 19 May 2025 14:56:04 +0000 (0:00:00.261) 0:00:00.261 ************ 2025-05-19 15:00:46.469423 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:00:46.469757 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:00:46.469774 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:00:46.469785 | orchestrator | 2025-05-19 15:00:46.469796 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 15:00:46.469807 | orchestrator | Monday 19 May 2025 14:56:05 +0000 (0:00:00.334) 0:00:00.596 ************ 2025-05-19 15:00:46.469818 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-19 15:00:46.469830 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-19 15:00:46.469840 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-19 15:00:46.469851 | orchestrator | 2025-05-19 15:00:46.469862 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-19 15:00:46.469873 | orchestrator | 2025-05-19 15:00:46.469884 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 15:00:46.469894 | orchestrator | Monday 19 May 2025 14:56:05 +0000 (0:00:00.395) 0:00:00.991 ************ 2025-05-19 15:00:46.469905 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 15:00:46.469916 | orchestrator | 2025-05-19 15:00:46.469927 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-19 15:00:46.469938 | orchestrator | Monday 19 May 2025 14:56:05 +0000 (0:00:00.538) 0:00:01.530 ************ 2025-05-19 15:00:46.469949 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-19 15:00:46.469960 | orchestrator | 2025-05-19 15:00:46.469970 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] 
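The wait loop above is the usual poll-until-terminal pattern around OSISM's Celery-backed task API: ask for the task state, stop on a terminal state, otherwise sleep and retry. A minimal Python sketch of that pattern, assuming a hypothetical get_task_state(task_id) helper rather than the actual osism client:

import time

def wait_for_task(task_id, get_task_state, interval=1):
    # Poll until the task leaves a non-terminal Celery state; the log above
    # prints the same two lines per iteration until the state flips.
    while True:
        state = get_task_state(task_id)  # hypothetical helper, not the osism API
        print(f"Task {task_id} is in state {state}")
        if state not in ("PENDING", "STARTED"):
            return state
        print(f"Wait {interval} second(s) until the next check")
        time.sleep(interval)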
2025-05-19 15:00:46.469209 | orchestrator | 2025-05-19 15:00:46.469256 | orchestrator | 2025-05-19 15:00:46.469269 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 15:00:46.469281 | orchestrator | 2025-05-19 15:00:46.469293 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 15:00:46.469305 | orchestrator | Monday 19 May 2025 14:56:04 +0000 (0:00:00.261) 0:00:00.261 ************ 2025-05-19 15:00:46.469423 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:00:46.469757 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:00:46.469774 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:00:46.469785 | orchestrator | 2025-05-19 15:00:46.469796 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 15:00:46.469807 | orchestrator | Monday 19 May 2025 14:56:05 +0000 (0:00:00.334) 0:00:00.596 ************ 2025-05-19 15:00:46.469818 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-19 15:00:46.469830 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-19 15:00:46.469840 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-19 15:00:46.469851 | orchestrator | 2025-05-19 15:00:46.469862 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-19 15:00:46.469873 | orchestrator | 2025-05-19 15:00:46.469884 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 15:00:46.469894 | orchestrator | Monday 19 May 2025 14:56:05 +0000 (0:00:00.395) 0:00:00.991 ************ 2025-05-19 15:00:46.469905 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 15:00:46.469916 | orchestrator | 2025-05-19 15:00:46.469927 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-19 15:00:46.469938 | orchestrator | Monday 19 May 2025 14:56:05 +0000 (0:00:00.538) 0:00:01.530 ************ 2025-05-19 15:00:46.469949 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-19 15:00:46.469960 | orchestrator | 2025-05-19 15:00:46.469970 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-19 15:00:46.470007 | orchestrator | Monday 19 May 2025 14:56:09 +0000 (0:00:03.264) 0:00:04.794 ************ 2025-05-19 15:00:46.470100 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-19 15:00:46.470117 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-19 15:00:46.470128 | orchestrator | 2025-05-19 15:00:46.470140 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-19 15:00:46.470151 | orchestrator | Monday 19 May 2025 14:56:15 +0000 (0:00:06.576) 0:00:11.371 ************ 2025-05-19 15:00:46.470162 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 15:00:46.470172 | orchestrator | 2025-05-19 15:00:46.470183 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-19 15:00:46.470194 | orchestrator | Monday 19 May 2025 14:56:18 +0000 (0:00:03.157) 0:00:14.529 ************ 2025-05-19 15:00:46.470204 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 15:00:46.470215 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-19 15:00:46.470580 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-19 15:00:46.470594 | orchestrator | 2025-05-19 15:00:46.470605 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-19 15:00:46.470616 | orchestrator | Monday 19 May 2025 14:56:26 +0000 (0:00:07.757) 0:00:22.287 ************ 2025-05-19 15:00:46.470627 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 15:00:46.470638 | orchestrator | 2025-05-19 15:00:46.470648 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-19 15:00:46.470659 | orchestrator | Monday 19 May 2025 14:56:29 +0000 (0:00:03.253) 0:00:25.541 ************ 2025-05-19 15:00:46.470670 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-19 15:00:46.470680 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-19 15:00:46.470691 | orchestrator | 2025-05-19 15:00:46.470702 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-19 15:00:46.470712 | orchestrator | Monday 19 May 2025 14:56:37 +0000 (0:00:07.194) 0:00:32.735 ************ 2025-05-19 15:00:46.470737 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-19 15:00:46.470748 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-19 15:00:46.470759 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-19 15:00:46.470770 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-19 15:00:46.470780 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-19 15:00:46.470791 | orchestrator | 2025-05-19 15:00:46.470802 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 15:00:46.470813 | orchestrator | Monday 19 May 2025 14:56:51 +0000 (0:00:14.843) 0:00:47.579 ************ 2025-05-19 15:00:46.470823 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 15:00:46.470835 | orchestrator |
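The service-ks-register tasks above perform standard Keystone v3 registration. A rough openstacksdk sketch of the same sequence, for illustration only (kolla-ansible uses its own Ansible modules; the cloud name is an assumption, the endpoint URLs and role names are taken from the log, and the password is elided):

import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

# "Creating services" and "Creating endpoints"
service = conn.identity.create_service(name="octavia", type="load-balancer")
for interface, url in (
    ("internal", "https://api-int.testbed.osism.xyz:9876"),
    ("public", "https://api.testbed.osism.xyz:9876"),
):
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

# "Creating projects", "Creating users" and "Granting user roles"
project = conn.identity.find_project("service")
user = conn.identity.create_user(name="octavia", password="...",  # elided
                                 default_project_id=project.id)
conn.identity.assign_project_role_to_user(project, user, conn.identity.find_role("admin"))

# "Adding octavia related roles"
for name in ("load-balancer_observer", "load-balancer_global_observer",
             "load-balancer_member", "load-balancer_admin",
             "load-balancer_quota_admin"):
    conn.identity.create_role(name=name)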
2025-05-19 15:00:46.470846 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-19 15:00:46.470856 | orchestrator | Monday 19 May 2025 14:56:53 +0000 (0:00:01.108) 0:00:48.687 ************ 2025-05-19 15:00:46.470867 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.470878 | orchestrator | 2025-05-19 15:00:46.470888 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-05-19 15:00:46.470899 | orchestrator | Monday 19 May 2025 14:56:58 +0000 (0:00:05.617) 0:00:54.305 ************ 2025-05-19 15:00:46.470910 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.470920 | orchestrator | 2025-05-19 15:00:46.470931 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-05-19 15:00:46.471003 | orchestrator | Monday 19 May 2025 14:57:03 +0000 (0:00:04.520) 0:00:58.826 ************ 2025-05-19 15:00:46.471017 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:00:46.471028 | orchestrator | 2025-05-19 15:00:46.471039 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-05-19 15:00:46.471049 | orchestrator | Monday 19 May 2025 14:57:06 +0000 (0:00:03.150) 0:01:01.976 ************ 2025-05-19 15:00:46.471060 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-05-19 15:00:46.471071 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-05-19 15:00:46.471081 | orchestrator | 2025-05-19 15:00:46.471092 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-05-19 15:00:46.471103 | orchestrator | Monday 19 May 2025 14:57:17 +0000 (0:00:10.666) 0:01:12.643 ************ 2025-05-19 15:00:46.471113 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-05-19 15:00:46.471124 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-05-19 15:00:46.471137 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-05-19 15:00:46.471158 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-05-19 15:00:46.471169 | orchestrator | 2025-05-19 15:00:46.471182 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-05-19 15:00:46.471195 | orchestrator | Monday 19 May 2025 14:57:33 +0000 (0:00:16.145) 0:01:28.789 ************ 2025-05-19 15:00:46.471208 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.471220 | orchestrator | 2025-05-19 15:00:46.471233 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-05-19 15:00:46.471245 | orchestrator | Monday 19 May 2025 14:57:37 +0000 (0:00:05.073) 0:01:33.248 ************ 2025-05-19 15:00:46.471257 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.471269 | orchestrator | 2025-05-19 15:00:46.471281 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-05-19 15:00:46.471293 | orchestrator | Monday 19 May 2025 14:57:42 +0000 (0:00:00.189) 0:01:38.322 ************ 2025-05-19 15:00:46.471305 | orchestrator | skipping: [testbed-node-0]
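The preparation steps above map directly onto Neutron calls: two security groups with ICMP, TCP 22 and TCP 9443 rules for amphora management plus UDP 5555 for the health manager, then the management network and subnet. A hedged openstacksdk sketch; the network name and CIDR are assumptions, since the log does not print them:

import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

# "Create security groups for octavia" and "Add rules for security groups"
lb_mgmt = conn.network.create_security_group(name="lb-mgmt-sec-grp")
for protocol, port in (("icmp", None), ("tcp", 22), ("tcp", 9443)):
    rule = {"security_group_id": lb_mgmt.id, "direction": "ingress",
            "protocol": protocol}
    if port is not None:
        rule["port_range_min"] = rule["port_range_max"] = port
    conn.network.create_security_group_rule(**rule)

hm = conn.network.create_security_group(name="lb-health-mgr-sec-grp")
conn.network.create_security_group_rule(
    security_group_id=hm.id, direction="ingress", protocol="udp",
    port_range_min=5555, port_range_max=5555)

# "Create loadbalancer management network" and "... subnet"
net = conn.network.create_network(name="lb-mgmt-net")         # assumed name
conn.network.create_subnet(network_id=net.id, name="lb-mgmt-subnet",
                           ip_version=4, cidr="10.1.0.0/24")  # assumed CIDR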
2025-05-19 15:00:46.471318 | orchestrator | 2025-05-19 15:00:46.471330 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-05-19 15:00:46.471342 | orchestrator | Monday 19 May 2025 14:57:42 +0000 (0:00:00.189) 0:01:38.512 ************ 2025-05-19 15:00:46.471355 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.471367 | orchestrator | 2025-05-19 15:00:46.471380 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 15:00:46.471392 | orchestrator | Monday 19 May 2025 14:57:47 +0000 (0:00:04.734) 0:01:43.246 ************ 2025-05-19 15:00:46.471404 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 15:00:46.471416 | orchestrator | 2025-05-19 15:00:46.471428 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-05-19 15:00:46.471441 | orchestrator | Monday 19 May 2025 14:57:48 +0000 (0:00:01.103) 0:01:44.350 ************ 2025-05-19 15:00:46.471453 | orchestrator | changed: [testbed-node-1] 2025-05-19 15:00:46.471465 | orchestrator | changed: [testbed-node-2] 2025-05-19 15:00:46.471479 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.471491 | orchestrator | 2025-05-19 15:00:46.471503 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-05-19 15:00:46.471515 | orchestrator | Monday 19 May 2025 14:57:53 +0000 (0:00:05.020) 0:01:49.370 ************ 2025-05-19 15:00:46.471528 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.471539 | orchestrator | changed: [testbed-node-1] 2025-05-19 15:00:46.471550 | orchestrator | changed: [testbed-node-2] 2025-05-19 15:00:46.471561 | orchestrator | 2025-05-19 15:00:46.471571 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-05-19 15:00:46.471582 | orchestrator | Monday 19 May 2025 14:57:58 +0000 (0:00:04.434) 0:01:53.805 ************ 2025-05-19 15:00:46.471593 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.471603 | orchestrator | changed: [testbed-node-1] 2025-05-19 15:00:46.471620 | orchestrator | changed: [testbed-node-2] 2025-05-19 15:00:46.471631 | orchestrator | 2025-05-19 15:00:46.471642 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-05-19 15:00:46.471652 | orchestrator | Monday 19 May 2025 14:57:59 +0000 (0:00:00.817) 0:01:54.623 ************ 2025-05-19 15:00:46.471663 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:00:46.471674 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:00:46.471684 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:00:46.471695 | orchestrator | 2025-05-19 15:00:46.471706 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-05-19 15:00:46.471716 | orchestrator | Monday 19 May 2025 14:58:01 +0000 (0:00:02.180) 0:01:56.803 ************ 2025-05-19 15:00:46.471727 | orchestrator | changed: [testbed-node-1] 2025-05-19 15:00:46.471738 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.471755 | orchestrator | changed: [testbed-node-2] 2025-05-19 15:00:46.471766 | orchestrator |
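The health-manager wiring above creates one Neutron port per controller, pins it to its host via binding:host_id, plugs it into openvswitch br-int as ohm0, and lets dhclient configure it. A sketch of the Neutron side only (port name and device_owner are illustrative assumptions, shown here for testbed-node-0):

import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

net = conn.network.find_network("lb-mgmt-net")                # assumed name
hm_sg = conn.network.find_security_group("lb-health-mgr-sec-grp")

# "Create ports for Octavia health-manager nodes"
port = conn.network.create_port(
    network_id=net.id,
    name="octavia-health-manager-port-testbed-node-0",        # hypothetical name
    security_group_ids=[hm_sg.id],
    device_owner="Octavia:health-mgr")                        # illustrative value

# "Update Octavia health manager port host_id"
conn.network.update_port(port, binding_host_id="testbed-node-0")

# "Add Octavia port to openvswitch br-int" then amounts to something like:
#   ovs-vsctl -- --may-exist add-port br-int ohm0 \
#     -- set Interface ohm0 type=internal \
#     -- set Interface ohm0 external-ids:iface-id=<port.id>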
2025-05-19 15:00:46.471777 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-05-19 15:00:46.471787 | orchestrator | Monday 19 May 2025 14:58:02 +0000 (0:00:01.221) 0:01:58.025 ************ 2025-05-19 15:00:46.471798 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.471808 | orchestrator | changed: [testbed-node-1] 2025-05-19 15:00:46.471819 | orchestrator | changed: [testbed-node-2] 2025-05-19 15:00:46.471829 | orchestrator | 2025-05-19 15:00:46.471840 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-05-19 15:00:46.471851 | orchestrator | Monday 19 May 2025 14:58:03 +0000 (0:00:01.124) 0:01:59.149 ************ 2025-05-19 15:00:46.471862 | orchestrator | changed: [testbed-node-1] 2025-05-19 15:00:46.471872 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.471883 | orchestrator | changed: [testbed-node-2] 2025-05-19 15:00:46.471894 | orchestrator | 2025-05-19 15:00:46.471937 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-05-19 15:00:46.471950 | orchestrator | Monday 19 May 2025 14:58:05 +0000 (0:00:01.829) 0:02:00.979 ************ 2025-05-19 15:00:46.471960 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.471971 | orchestrator | changed: [testbed-node-1] 2025-05-19 15:00:46.472047 | orchestrator | changed: [testbed-node-2] 2025-05-19 15:00:46.472059 | orchestrator | 2025-05-19 15:00:46.472070 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-05-19 15:00:46.472081 | orchestrator | Monday 19 May 2025 14:58:07 +0000 (0:00:01.682) 0:02:02.661 ************ 2025-05-19 15:00:46.472091 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:00:46.472102 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:00:46.472113 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:00:46.472124 | orchestrator | 2025-05-19 15:00:46.472134 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-05-19 15:00:46.472145 | orchestrator | Monday 19 May 2025 14:58:07 +0000 (0:00:00.619) 0:02:03.280 ************ 2025-05-19 15:00:46.472156 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:00:46.472166 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:00:46.472177 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:00:46.472187 | orchestrator | 2025-05-19 15:00:46.472198 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 15:00:46.472209 | orchestrator | Monday 19 May 2025 14:58:10 +0000 (0:00:02.881) 0:02:06.162 ************ 2025-05-19 15:00:46.472220 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 15:00:46.472231 | orchestrator | 2025-05-19 15:00:46.472242 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-05-19 15:00:46.472252 | orchestrator | Monday 19 May 2025 14:58:11 +0000 (0:00:00.636) 0:02:06.799 ************ 2025-05-19 15:00:46.472263 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:00:46.472274 | orchestrator | 2025-05-19 15:00:46.472284 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-05-19 15:00:46.472295 | orchestrator | Monday 19 May 2025 14:58:14 +0000 (0:00:03.748) 0:02:10.547 ************ 2025-05-19 15:00:46.472306 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:00:46.472316 | orchestrator | 2025-05-19 15:00:46.472327 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-05-19 15:00:46.472338 | orchestrator | Monday 19 May 2025 14:58:17 +0000
(0:00:02.969) 0:02:13.516 ************ 2025-05-19 15:00:46.472348 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-05-19 15:00:46.472359 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-05-19 15:00:46.472370 | orchestrator | 2025-05-19 15:00:46.472381 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-05-19 15:00:46.472391 | orchestrator | Monday 19 May 2025 14:58:24 +0000 (0:00:06.721) 0:02:20.238 ************ 2025-05-19 15:00:46.472402 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:00:46.472413 | orchestrator | 2025-05-19 15:00:46.472432 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-05-19 15:00:46.472443 | orchestrator | Monday 19 May 2025 14:58:27 +0000 (0:00:03.234) 0:02:23.472 ************ 2025-05-19 15:00:46.472453 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:00:46.472464 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:00:46.472475 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:00:46.472484 | orchestrator | 2025-05-19 15:00:46.472494 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-05-19 15:00:46.472504 | orchestrator | Monday 19 May 2025 14:58:28 +0000 (0:00:00.297) 0:02:23.770 ************ 2025-05-19 15:00:46.472522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.472567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.472580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.472591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.472602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.472619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.472636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.472648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.472684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.472696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.472707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.472723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.472734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.472748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.472759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.472769 | orchestrator | 2025-05-19 15:00:46.472779 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-05-19 15:00:46.472789 | orchestrator | Monday 19 May 2025 14:58:30 +0000 (0:00:02.573) 0:02:26.343 ************ 2025-05-19 15:00:46.472799 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:00:46.472808 | orchestrator | 2025-05-19 15:00:46.472858 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-05-19 15:00:46.472877 | orchestrator | Monday 19 May 2025 14:58:31 +0000 (0:00:00.318) 0:02:26.662 ************ 2025-05-19 15:00:46.472895 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:00:46.472911 | orchestrator | skipping: [testbed-node-1] 2025-05-19 15:00:46.472927 | orchestrator | skipping: [testbed-node-2] 2025-05-19 15:00:46.472938 | orchestrator | 2025-05-19 15:00:46.472948 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-05-19 15:00:46.472957 | orchestrator | Monday 19 May 2025 14:58:31 +0000 (0:00:00.277) 0:02:26.939 ************ 2025-05-19 15:00:46.472967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 15:00:46.473003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 15:00:46.473015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 15:00:46.473051 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:00:46.473094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 15:00:46.473106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2025-05-19 15:00:46.473122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 15:00:46.473152 | orchestrator | skipping: [testbed-node-1] 2025-05-19 15:00:46.473172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 15:00:46.473213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 15:00:46.473225 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 15:00:46.473261 | orchestrator | skipping: [testbed-node-2] 2025-05-19 15:00:46.473271 | orchestrator | 2025-05-19 15:00:46.473280 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 15:00:46.473290 | orchestrator | Monday 19 May 2025 14:58:31 +0000 (0:00:00.618) 0:02:27.557 ************ 2025-05-19 15:00:46.473299 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 15:00:46.473309 | orchestrator | 2025-05-19 15:00:46.473319 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-05-19 15:00:46.473328 | orchestrator | Monday 19 May 2025 14:58:32 +0000 (0:00:00.494) 0:02:28.052 ************ 2025-05-19 15:00:46.473342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.473382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.473400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.473410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.473420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.473430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.473444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.473455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.473475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.473485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.473495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.473506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.473516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.473530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.473547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.473564 | orchestrator | 2025-05-19 15:00:46.473574 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-05-19 15:00:46.473584 | orchestrator | Monday 19 May 2025 14:58:37 +0000 (0:00:04.977) 0:02:33.030 ************ 2025-05-19 15:00:46.473594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 15:00:46.473604 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 15:00:46.473614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 15:00:46.473648 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:00:46.473664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 15:00:46.473680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 15:00:46.473690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 15:00:46.473720 | orchestrator | skipping: [testbed-node-1] 2025-05-19 15:00:46.473766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 15:00:46.473789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 15:00:46.473804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 15:00:46.473835 | orchestrator | skipping: [testbed-node-2] 2025-05-19 15:00:46.473845 | orchestrator | 2025-05-19 15:00:46.473855 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-05-19 15:00:46.473865 | orchestrator | Monday 19 May 2025 14:58:38 +0000 (0:00:00.612) 0:02:33.642 ************ 2025-05-19 15:00:46.473880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 15:00:46.473890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 15:00:46.473906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.473933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 15:00:46.473943 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:00:46.473953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}})  2025-05-19 15:00:46.473963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 15:00:46.473998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.474075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.474096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 15:00:46.474107 | orchestrator | skipping: [testbed-node-1] 2025-05-19 15:00:46.474118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 15:00:46.474128 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 15:00:46.474138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.474153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 15:00:46.474171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 15:00:46.474182 | orchestrator | skipping: [testbed-node-2] 2025-05-19 15:00:46.474192 | orchestrator | 2025-05-19 15:00:46.474202 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-05-19 15:00:46.474211 | orchestrator | Monday 19 May 2025 14:58:38 +0000 (0:00:00.816) 0:02:34.459 ************ 2025-05-19 15:00:46.474228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.474240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.474250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.474272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.474282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.474293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.474309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474374 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474422 | orchestrator | 2025-05-19 15:00:46.474431 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-05-19 15:00:46.474441 | orchestrator | Monday 19 May 2025 14:58:43 +0000 (0:00:05.036) 0:02:39.495 ************ 2025-05-19 15:00:46.474451 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-19 15:00:46.474461 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-19 15:00:46.474471 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-19 15:00:46.474480 | orchestrator | 2025-05-19 15:00:46.474490 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-05-19 15:00:46.474507 | orchestrator | Monday 19 May 2025 14:58:45 +0000 (0:00:01.691) 0:02:41.187 ************ 2025-05-19 15:00:46.474517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.474531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.474549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 15:00:46.474560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.474570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.474587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 15:00:46.474597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474647 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 15:00:46.474708 | orchestrator | 2025-05-19 15:00:46.474718 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-05-19 15:00:46.474727 | orchestrator | Monday 19 May 2025 14:59:02 +0000 (0:00:16.476) 0:02:57.663 ************ 2025-05-19 15:00:46.474737 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:00:46.474747 | orchestrator | changed: [testbed-node-1] 2025-05-19 15:00:46.474757 | orchestrator | changed: [testbed-node-2] 2025-05-19 
15:00:46.474766 | orchestrator |
2025-05-19 15:00:46.474776 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2025-05-19 15:00:46.474785 | orchestrator | Monday 19 May 2025 14:59:03 +0000 (0:00:01.434) 0:02:59.098 ************
2025-05-19 15:00:46.474795 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-05-19 15:00:46.474804 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-05-19 15:00:46.474819 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-05-19 15:00:46.474829 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-05-19 15:00:46.474839 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-05-19 15:00:46.474848 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-05-19 15:00:46.474858 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-05-19 15:00:46.474867 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-05-19 15:00:46.474877 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-05-19 15:00:46.474886 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-05-19 15:00:46.474896 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-05-19 15:00:46.474905 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-05-19 15:00:46.474915 | orchestrator |
2025-05-19 15:00:46.474924 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2025-05-19 15:00:46.474939 | orchestrator | Monday 19 May 2025 14:59:08 +0000 (0:00:04.891) 0:03:03.989 ************
2025-05-19 15:00:46.475061 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-05-19 15:00:46.475073 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-05-19 15:00:46.475083 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-05-19 15:00:46.475092 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-05-19 15:00:46.475102 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-05-19 15:00:46.475111 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-05-19 15:00:46.475120 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-05-19 15:00:46.475130 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-05-19 15:00:46.475140 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-05-19 15:00:46.475149 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-05-19 15:00:46.475159 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-05-19 15:00:46.475168 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-05-19 15:00:46.475178 | orchestrator |
2025-05-19 15:00:46.475187 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2025-05-19 15:00:46.475197 | orchestrator | Monday 19 May 2025 14:59:13 +0000 (0:00:04.782) 0:03:08.771 ************
2025-05-19 15:00:46.475206 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-05-19 15:00:46.475215 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-05-19 15:00:46.475225 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-05-19 15:00:46.475234 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-05-19 15:00:46.475244 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-05-19 15:00:46.475253 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-05-19 15:00:46.475263 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-05-19 15:00:46.475272 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-05-19 15:00:46.475282 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-05-19 15:00:46.475291 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-05-19 15:00:46.475300 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-05-19 15:00:46.475310 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-05-19 15:00:46.475320 | orchestrator |
2025-05-19 15:00:46.475329 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2025-05-19 15:00:46.475339 | orchestrator | Monday 19 May 2025 14:59:18 +0000 (0:00:05.006) 0:03:13.777 ************
2025-05-19 15:00:46.475356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-19 15:00:46.475375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-19 15:00:46.475396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-19 15:00:46.475406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-19 15:00:46.475417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-19 15:00:46.475427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-19 15:00:46.475441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-19 15:00:46.475457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-19 15:00:46.475474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-19 15:00:46.475484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-19 15:00:46.475494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-19 15:00:46.475504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-19 15:00:46.475518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-19 15:00:46.475528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-19 15:00:46.475550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-19 15:00:46.475560 | orchestrator |
2025-05-19 15:00:46.475570 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-19 15:00:46.475580 | orchestrator | Monday 19 May 2025 14:59:21 +0000 (0:00:03.369) 0:03:17.147 ************
2025-05-19 15:00:46.475590 | orchestrator | skipping: [testbed-node-0]
2025-05-19 15:00:46.475600 | orchestrator | skipping: [testbed-node-1]
2025-05-19 15:00:46.475609 | orchestrator | skipping: [testbed-node-2]
2025-05-19 15:00:46.475619 | orchestrator |
2025-05-19 15:00:46.475628 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2025-05-19 15:00:46.475638 | orchestrator | Monday 19 May 2025 14:59:21 +0000 (0:00:00.281) 0:03:17.429 ************
2025-05-19 15:00:46.475647 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:00:46.475657 | orchestrator |
2025-05-19 15:00:46.475666 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2025-05-19 15:00:46.475676 | orchestrator | Monday 19 May 2025 14:59:24 +0000 (0:00:02.358) 0:03:19.788 ************
2025-05-19 15:00:46.475685 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:00:46.475695 | orchestrator |
2025-05-19 15:00:46.475704 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2025-05-19 15:00:46.475714 | orchestrator | Monday 19 May 2025 14:59:26 +0000 (0:00:02.068) 0:03:21.856 ************
2025-05-19 15:00:46.475724 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:00:46.475733 | orchestrator |
2025-05-19 15:00:46.475742 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2025-05-19 15:00:46.475752 | orchestrator | Monday 19 May 2025 14:59:28 +0000 (0:00:02.077) 0:03:23.934 ************
2025-05-19 15:00:46.475800 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:00:46.475812 | orchestrator |
2025-05-19 15:00:46.475822 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2025-05-19 15:00:46.475832 | orchestrator | Monday 19 May 2025 14:59:30 +0000 (0:00:02.002) 0:03:25.937 ************
2025-05-19 15:00:46.475841 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:00:46.475851 | orchestrator |
2025-05-19 15:00:46.475860 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-05-19 15:00:46.475870 | orchestrator | Monday 19 May 2025 14:59:50 +0000 (0:00:19.951) 0:03:45.889 ************
2025-05-19 15:00:46.475880 | orchestrator |
2025-05-19 15:00:46.475889 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-05-19 15:00:46.475898 | orchestrator | Monday 19 May 2025 14:59:50 +0000 (0:00:00.078) 0:03:45.967 ************
2025-05-19 15:00:46.475908 | orchestrator |
2025-05-19 15:00:46.475917 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-05-19 15:00:46.475927 | orchestrator | Monday 19 May 2025 14:59:50 +0000 (0:00:00.078) 0:03:46.046 ************
2025-05-19 15:00:46.475937 | orchestrator |
2025-05-19 15:00:46.475946 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-05-19 15:00:46.475956 | orchestrator | Monday 19 May 2025 14:59:50 +0000 (0:00:00.063) 0:03:46.109 ************
2025-05-19 15:00:46.475965 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:00:46.475975 | orchestrator | changed: [testbed-node-2]
2025-05-19 15:00:46.476010 | orchestrator | changed: [testbed-node-1]
2025-05-19 15:00:46.476019 | orchestrator |
2025-05-19 15:00:46.476036 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-05-19 15:00:46.476046 | orchestrator | Monday 19 May 2025 15:00:01 +0000 (0:00:11.393) 0:03:57.503 ************
2025-05-19 15:00:46.476056 | orchestrator | changed: [testbed-node-2]
2025-05-19 15:00:46.476066 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:00:46.476075 | orchestrator | changed: [testbed-node-1]
2025-05-19 15:00:46.476085 | orchestrator |
2025-05-19 15:00:46.476095 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-05-19 15:00:46.476104 | orchestrator | Monday 19 May 2025 15:00:13 +0000 (0:00:11.572) 0:04:09.076 ************
2025-05-19 15:00:46.476114 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:00:46.476123 | orchestrator | changed: [testbed-node-1]
2025-05-19 15:00:46.476133 | orchestrator | changed: [testbed-node-2]
2025-05-19 15:00:46.476142 | orchestrator |
2025-05-19 15:00:46.476152 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-05-19 15:00:46.476161 | orchestrator | Monday 19 May 2025 15:00:23 +0000 (0:00:09.925) 0:04:19.002 ************
2025-05-19 15:00:46.476176 | orchestrator | changed: [testbed-node-1]
2025-05-19 15:00:46.476185 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:00:46.476195 | orchestrator | changed: [testbed-node-2]
2025-05-19 15:00:46.476204 | orchestrator |
2025-05-19 15:00:46.476214 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-05-19 15:00:46.476223 | orchestrator | Monday 19 May 2025 15:00:33 +0000 (0:00:10.456) 0:04:29.459 ************
2025-05-19 15:00:46.476233 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:00:46.476242 | orchestrator | changed: [testbed-node-1]
2025-05-19 15:00:46.476252 | orchestrator | changed: [testbed-node-2]
2025-05-19 15:00:46.476261 | orchestrator |
2025-05-19 15:00:46.476271 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 15:00:46.476281 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-19 15:00:46.476291 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 15:00:46.476301 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 15:00:46.476310 | orchestrator |
2025-05-19 15:00:46.476320 | orchestrator |
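The PLAY RECAP above reports failed=0 and unreachable=0 for all three hosts, which is the condition a CI consumer of this log ultimately cares about. A minimal sketch for asserting that while post-processing such a log (standard Ansible recap format; the helper name is mine, not part of the job):

    import re

    # Matches e.g. "testbed-node-0 : ok=57 changed=39 unreachable=0 failed=0 ..."
    RECAP = re.compile(
        r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
        r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
    )

    def recap_ok(line: str) -> bool:
        # True when the host finished with no failed and no unreachable tasks.
        m = RECAP.match(line)
        return bool(m) and m["failed"] == "0" and m["unreachable"] == "0"

    assert recap_ok("testbed-node-0 : ok=57 changed=39 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0")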
2025-05-19 15:00:46.476329 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 15:00:46.476339 | orchestrator | Monday 19 May 2025 15:00:44 +0000 (0:00:10.389) 0:04:39.848 ************
2025-05-19 15:00:46.476354 | orchestrator | ===============================================================================
2025-05-19 15:00:46.476364 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.95s
2025-05-19 15:00:46.476392 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.48s
2025-05-19 15:00:46.476402 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.15s
2025-05-19 15:00:46.476412 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.84s
2025-05-19 15:00:46.476421 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.57s
2025-05-19 15:00:46.476431 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.39s
2025-05-19 15:00:46.476441 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.67s
2025-05-19 15:00:46.476450 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.46s
2025-05-19 15:00:46.476459 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.39s
2025-05-19 15:00:46.476469 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 9.93s
2025-05-19 15:00:46.476479 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.76s
2025-05-19 15:00:46.476488 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.19s
2025-05-19 15:00:46.476504 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.72s
2025-05-19 15:00:46.476514 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.58s
2025-05-19 15:00:46.476523 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.62s
2025-05-19 15:00:46.476533 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.07s
2025-05-19 15:00:46.476543 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.04s
2025-05-19 15:00:46.476552 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.02s
2025-05-19 15:00:46.476562 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.01s
2025-05-19 15:00:46.476571 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.98s
2025-05-19 15:00:46.476581 | orchestrator | 2025-05-19 15:00:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-19 15:00:49.509413 | orchestrator | 2025-05-19 15:00:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-19 15:00:52.558266 | orchestrator | 2025-05-19 15:00:52 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-19 15:00:55.606857 | orchestrator | 2025-05-19 15:00:55 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-19 15:00:58.651609 | orchestrator | 2025-05-19 15:00:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-19 15:01:01.711114 | orchestrator | 2025-05-19 15:01:01 | INFO  | Wait 1 second(s) until refresh of running tasks
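The repeated "Wait 1 second(s)" messages around this point are the OSISM client polling an asynchronous task until the remote Ansible run finishes. A minimal sketch of such a poll loop, using a hypothetical get_state callable rather than the real osism internals:

    import time

    def wait_for_task(get_state, interval=1.0):
        # Poll until the task reaches a terminal state; assumed states:
        # "PENDING", "RUNNING", "SUCCESS", "FAILURE".
        while True:
            state = get_state()
            if state in ("SUCCESS", "FAILURE"):
                return state
            print(f"Wait {interval:.0f} second(s) until refresh of running tasks")
            time.sleep(interval)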
2025-05-19 15:01:04.760611 | orchestrator | 2025-05-19 15:01:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:07.812806 | orchestrator | 2025-05-19 15:01:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:10.872609 | orchestrator | 2025-05-19 15:01:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:13.915918 | orchestrator | 2025-05-19 15:01:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:16.957781 | orchestrator | 2025-05-19 15:01:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:20.003692 | orchestrator | 2025-05-19 15:01:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:23.043383 | orchestrator | 2025-05-19 15:01:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:26.092055 | orchestrator | 2025-05-19 15:01:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:29.132828 | orchestrator | 2025-05-19 15:01:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:32.188577 | orchestrator | 2025-05-19 15:01:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:35.231840 | orchestrator | 2025-05-19 15:01:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:38.280744 | orchestrator | 2025-05-19 15:01:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:41.331107 | orchestrator | 2025-05-19 15:01:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:44.385777 | orchestrator | 2025-05-19 15:01:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 15:01:47.429905 | orchestrator | 2025-05-19 15:01:47.665459 | orchestrator | 2025-05-19 15:01:47.668293 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon May 19 15:01:47 UTC 2025 2025-05-19 15:01:47.668324 | orchestrator | 2025-05-19 15:01:48.128673 | orchestrator | ok: Runtime: 0:33:13.229170 2025-05-19 15:01:48.429620 | 2025-05-19 15:01:48.429810 | TASK [Bootstrap services] 2025-05-19 15:01:49.383290 | orchestrator | 2025-05-19 15:01:49.383567 | orchestrator | # BOOTSTRAP 2025-05-19 15:01:49.383593 | orchestrator | 2025-05-19 15:01:49.383608 | orchestrator | + set -e 2025-05-19 15:01:49.383621 | orchestrator | + echo 2025-05-19 15:01:49.383635 | orchestrator | + echo '# BOOTSTRAP' 2025-05-19 15:01:49.383653 | orchestrator | + echo 2025-05-19 15:01:49.383697 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-05-19 15:01:49.391364 | orchestrator | + set -e 2025-05-19 15:01:49.391406 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-05-19 15:01:51.138446 | orchestrator | 2025-05-19 15:01:51 | INFO  | It takes a moment until task b1b87a0b-0447-4f17-9530-dfce72637b53 (flavor-manager) has been started and output is visible here. 
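Note: the flavor-manager task whose output follows creates the SCS standard flavors. The SCS name encodes the shape: SCS-2V-8-20 means 2 vCPUs, 8 GiB RAM and a 20 GB root disk, and a trailing "s" as in SCS-2V-4-20s indicates local SSD-backed storage. Each creation corresponds roughly to a plain flavor-create call; a minimal sketch (the tool also manages SCS extra specs, omitted here):

# Hand-create one flavor of the same shape; --ram is in MiB.
openstack flavor create --vcpus 2 --ram 8192 --disk 20 --public SCS-2V-8-20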
2025-05-19 15:01:55.218646 | orchestrator | 2025-05-19 15:01:55 | INFO  | Flavor SCS-1V-4 created 2025-05-19 15:01:55.416272 | orchestrator | 2025-05-19 15:01:55 | INFO  | Flavor SCS-2V-8 created 2025-05-19 15:01:55.609398 | orchestrator | 2025-05-19 15:01:55 | INFO  | Flavor SCS-4V-16 created 2025-05-19 15:01:55.746239 | orchestrator | 2025-05-19 15:01:55 | INFO  | Flavor SCS-8V-32 created 2025-05-19 15:01:55.881948 | orchestrator | 2025-05-19 15:01:55 | INFO  | Flavor SCS-1V-2 created 2025-05-19 15:01:56.017533 | orchestrator | 2025-05-19 15:01:56 | INFO  | Flavor SCS-2V-4 created 2025-05-19 15:01:56.140468 | orchestrator | 2025-05-19 15:01:56 | INFO  | Flavor SCS-4V-8 created 2025-05-19 15:01:56.256426 | orchestrator | 2025-05-19 15:01:56 | INFO  | Flavor SCS-8V-16 created 2025-05-19 15:01:56.410652 | orchestrator | 2025-05-19 15:01:56 | INFO  | Flavor SCS-16V-32 created 2025-05-19 15:01:56.541295 | orchestrator | 2025-05-19 15:01:56 | INFO  | Flavor SCS-1V-8 created 2025-05-19 15:01:56.679034 | orchestrator | 2025-05-19 15:01:56 | INFO  | Flavor SCS-2V-16 created 2025-05-19 15:01:56.809342 | orchestrator | 2025-05-19 15:01:56 | INFO  | Flavor SCS-4V-32 created 2025-05-19 15:01:56.933867 | orchestrator | 2025-05-19 15:01:56 | INFO  | Flavor SCS-1L-1 created 2025-05-19 15:01:57.072483 | orchestrator | 2025-05-19 15:01:57 | INFO  | Flavor SCS-2V-4-20s created 2025-05-19 15:01:57.210854 | orchestrator | 2025-05-19 15:01:57 | INFO  | Flavor SCS-4V-16-100s created 2025-05-19 15:01:57.346807 | orchestrator | 2025-05-19 15:01:57 | INFO  | Flavor SCS-1V-4-10 created 2025-05-19 15:01:57.491319 | orchestrator | 2025-05-19 15:01:57 | INFO  | Flavor SCS-2V-8-20 created 2025-05-19 15:01:57.636467 | orchestrator | 2025-05-19 15:01:57 | INFO  | Flavor SCS-4V-16-50 created 2025-05-19 15:01:57.770140 | orchestrator | 2025-05-19 15:01:57 | INFO  | Flavor SCS-8V-32-100 created 2025-05-19 15:01:57.898557 | orchestrator | 2025-05-19 15:01:57 | INFO  | Flavor SCS-1V-2-5 created 2025-05-19 15:01:58.020103 | orchestrator | 2025-05-19 15:01:58 | INFO  | Flavor SCS-2V-4-10 created 2025-05-19 15:01:58.150486 | orchestrator | 2025-05-19 15:01:58 | INFO  | Flavor SCS-4V-8-20 created 2025-05-19 15:01:58.282134 | orchestrator | 2025-05-19 15:01:58 | INFO  | Flavor SCS-8V-16-50 created 2025-05-19 15:01:58.415455 | orchestrator | 2025-05-19 15:01:58 | INFO  | Flavor SCS-16V-32-100 created 2025-05-19 15:01:58.539136 | orchestrator | 2025-05-19 15:01:58 | INFO  | Flavor SCS-1V-8-20 created 2025-05-19 15:01:58.685029 | orchestrator | 2025-05-19 15:01:58 | INFO  | Flavor SCS-2V-16-50 created 2025-05-19 15:01:58.819088 | orchestrator | 2025-05-19 15:01:58 | INFO  | Flavor SCS-4V-32-100 created 2025-05-19 15:01:58.941925 | orchestrator | 2025-05-19 15:01:58 | INFO  | Flavor SCS-1L-1-5 created 2025-05-19 15:02:01.120356 | orchestrator | 2025-05-19 15:02:01 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-05-19 15:02:01.177705 | orchestrator | 2025-05-19 15:02:01 | INFO  | Task 67ce6b9e-2dc8-4d4c-95be-c58240975dfa (bootstrap-basic) was prepared for execution. 2025-05-19 15:02:01.177834 | orchestrator | 2025-05-19 15:02:01 | INFO  | It takes a moment until task 67ce6b9e-2dc8-4d4c-95be-c58240975dfa (bootstrap-basic) has been started and output is visible here. 
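Note: all 28 flavors above were created without errors. Once the task finishes they are public and visible to any project; a quick check, assuming an admin openrc is sourced on the manager:

# The flavors created by flavor-manager should all be listed.
openstack flavor list --all | grep SCS-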
2025-05-19 15:02:05.292501 | orchestrator | 2025-05-19 15:02:05.292608 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-05-19 15:02:05.292624 | orchestrator | 2025-05-19 15:02:05.296916 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 15:02:05.301635 | orchestrator | Monday 19 May 2025 15:02:05 +0000 (0:00:00.075) 0:00:00.075 ************ 2025-05-19 15:02:07.087174 | orchestrator | ok: [localhost] 2025-05-19 15:02:07.087461 | orchestrator | 2025-05-19 15:02:07.087488 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-05-19 15:02:07.087751 | orchestrator | Monday 19 May 2025 15:02:07 +0000 (0:00:01.798) 0:00:01.873 ************ 2025-05-19 15:02:16.032446 | orchestrator | ok: [localhost] 2025-05-19 15:02:16.032622 | orchestrator | 2025-05-19 15:02:16.032642 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-05-19 15:02:16.032906 | orchestrator | Monday 19 May 2025 15:02:16 +0000 (0:00:08.946) 0:00:10.820 ************ 2025-05-19 15:02:22.730801 | orchestrator | changed: [localhost] 2025-05-19 15:02:22.731064 | orchestrator | 2025-05-19 15:02:22.731450 | orchestrator | TASK [Get volume type local] *************************************************** 2025-05-19 15:02:22.732413 | orchestrator | Monday 19 May 2025 15:02:22 +0000 (0:00:06.697) 0:00:17.517 ************ 2025-05-19 15:02:29.219342 | orchestrator | ok: [localhost] 2025-05-19 15:02:29.223033 | orchestrator | 2025-05-19 15:02:29.223608 | orchestrator | TASK [Create volume type local] ************************************************ 2025-05-19 15:02:29.224627 | orchestrator | Monday 19 May 2025 15:02:29 +0000 (0:00:06.483) 0:00:24.001 ************ 2025-05-19 15:02:36.585875 | orchestrator | changed: [localhost] 2025-05-19 15:02:36.586276 | orchestrator | 2025-05-19 15:02:36.587193 | orchestrator | TASK [Create public network] *************************************************** 2025-05-19 15:02:36.588615 | orchestrator | Monday 19 May 2025 15:02:36 +0000 (0:00:07.370) 0:00:31.372 ************ 2025-05-19 15:02:41.626543 | orchestrator | changed: [localhost] 2025-05-19 15:02:41.626701 | orchestrator | 2025-05-19 15:02:41.627671 | orchestrator | TASK [Set public network to default] ******************************************* 2025-05-19 15:02:41.629577 | orchestrator | Monday 19 May 2025 15:02:41 +0000 (0:00:05.040) 0:00:36.413 ************ 2025-05-19 15:02:47.524298 | orchestrator | changed: [localhost] 2025-05-19 15:02:47.524502 | orchestrator | 2025-05-19 15:02:47.524615 | orchestrator | TASK [Create public subnet] **************************************************** 2025-05-19 15:02:47.525348 | orchestrator | Monday 19 May 2025 15:02:47 +0000 (0:00:05.897) 0:00:42.310 ************ 2025-05-19 15:02:51.838653 | orchestrator | changed: [localhost] 2025-05-19 15:02:51.839519 | orchestrator | 2025-05-19 15:02:51.840433 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-05-19 15:02:51.842592 | orchestrator | Monday 19 May 2025 15:02:51 +0000 (0:00:04.314) 0:00:46.625 ************ 2025-05-19 15:02:55.569795 | orchestrator | changed: [localhost] 2025-05-19 15:02:55.570238 | orchestrator | 2025-05-19 15:02:55.571997 | orchestrator | TASK [Create manager role] ***************************************************** 2025-05-19 15:02:55.573051 | orchestrator | Monday 19 May 2025 15:02:55 
+0000 (0:00:03.728) 0:00:50.354 ************ 2025-05-19 15:02:59.032742 | orchestrator | ok: [localhost] 2025-05-19 15:02:59.032852 | orchestrator | 2025-05-19 15:02:59.033520 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 15:02:59.034267 | orchestrator | 2025-05-19 15:02:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 15:02:59.034313 | orchestrator | 2025-05-19 15:02:59 | INFO  | Please wait and do not abort execution. 2025-05-19 15:02:59.035558 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 15:02:59.036385 | orchestrator | 2025-05-19 15:02:59.037592 | orchestrator | 2025-05-19 15:02:59.038214 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 15:02:59.038785 | orchestrator | Monday 19 May 2025 15:02:59 +0000 (0:00:03.463) 0:00:53.817 ************ 2025-05-19 15:02:59.039255 | orchestrator | =============================================================================== 2025-05-19 15:02:59.039795 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.95s 2025-05-19 15:02:59.040410 | orchestrator | Create volume type local ------------------------------------------------ 7.37s 2025-05-19 15:02:59.040753 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.70s 2025-05-19 15:02:59.041471 | orchestrator | Get volume type local --------------------------------------------------- 6.48s 2025-05-19 15:02:59.041878 | orchestrator | Set public network to default ------------------------------------------- 5.90s 2025-05-19 15:02:59.042678 | orchestrator | Create public network --------------------------------------------------- 5.04s 2025-05-19 15:02:59.043166 | orchestrator | Create public subnet ---------------------------------------------------- 4.31s 2025-05-19 15:02:59.043666 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.73s 2025-05-19 15:02:59.044058 | orchestrator | Create manager role ----------------------------------------------------- 3.46s 2025-05-19 15:02:59.044728 | orchestrator | Gathering Facts --------------------------------------------------------- 1.80s 2025-05-19 15:03:01.267142 | orchestrator | 2025-05-19 15:03:01 | INFO  | It takes a moment until task 93f62836-fbb2-4487-8427-d2b1f50fd202 (image-manager) has been started and output is visible here. 2025-05-19 15:03:04.624983 | orchestrator | 2025-05-19 15:03:04 | INFO  | Processing image 'Cirros 0.6.2' 2025-05-19 15:03:04.863984 | orchestrator | 2025-05-19 15:03:04 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-05-19 15:03:04.864672 | orchestrator | 2025-05-19 15:03:04 | INFO  | Importing image Cirros 0.6.2 2025-05-19 15:03:04.864998 | orchestrator | 2025-05-19 15:03:04 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-05-19 15:03:06.498297 | orchestrator | 2025-05-19 15:03:06 | INFO  | Waiting for image to leave queued state... 2025-05-19 15:03:08.552537 | orchestrator | 2025-05-19 15:03:08 | INFO  | Waiting for import to complete... 
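Note: the bootstrap-basic play above prepared the basic OpenStack resources: LUKS and local volume types, the external network "public" with its subnet and a default IPv4 subnet pool, and a manager role. The play's module arguments are not shown in the log; a rough CLI equivalent for two of the steps, with illustrative encryption parameters, would be:

# Volume type with LUKS encryption (cipher and key size are assumptions).
openstack volume type create --encryption-provider luks \
    --encryption-cipher aes-xts-plain64 --encryption-key-size 256 \
    --encryption-control-location front-end LUKS

# External provider network, then mark it as the default external network.
openstack network create --external public
openstack network set --default public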
2025-05-19 15:03:18.878119 | orchestrator | 2025-05-19 15:03:18 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-05-19 15:03:19.026900 | orchestrator | 2025-05-19 15:03:19 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-05-19 15:03:19.027582 | orchestrator | 2025-05-19 15:03:19 | INFO  | Setting internal_version = 0.6.2 2025-05-19 15:03:19.028350 | orchestrator | 2025-05-19 15:03:19 | INFO  | Setting image_original_user = cirros 2025-05-19 15:03:19.029507 | orchestrator | 2025-05-19 15:03:19 | INFO  | Adding tag os:cirros 2025-05-19 15:03:19.335657 | orchestrator | 2025-05-19 15:03:19 | INFO  | Setting property architecture: x86_64 2025-05-19 15:03:19.631156 | orchestrator | 2025-05-19 15:03:19 | INFO  | Setting property hw_disk_bus: scsi 2025-05-19 15:03:19.841283 | orchestrator | 2025-05-19 15:03:19 | INFO  | Setting property hw_rng_model: virtio 2025-05-19 15:03:20.069394 | orchestrator | 2025-05-19 15:03:20 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-19 15:03:20.239965 | orchestrator | 2025-05-19 15:03:20 | INFO  | Setting property hw_watchdog_action: reset 2025-05-19 15:03:20.421139 | orchestrator | 2025-05-19 15:03:20 | INFO  | Setting property hypervisor_type: qemu 2025-05-19 15:03:20.613291 | orchestrator | 2025-05-19 15:03:20 | INFO  | Setting property os_distro: cirros 2025-05-19 15:03:20.807210 | orchestrator | 2025-05-19 15:03:20 | INFO  | Setting property replace_frequency: never 2025-05-19 15:03:21.039240 | orchestrator | 2025-05-19 15:03:21 | INFO  | Setting property uuid_validity: none 2025-05-19 15:03:21.269260 | orchestrator | 2025-05-19 15:03:21 | INFO  | Setting property provided_until: none 2025-05-19 15:03:21.458566 | orchestrator | 2025-05-19 15:03:21 | INFO  | Setting property image_description: Cirros 2025-05-19 15:03:21.683206 | orchestrator | 2025-05-19 15:03:21 | INFO  | Setting property image_name: Cirros 2025-05-19 15:03:21.862308 | orchestrator | 2025-05-19 15:03:21 | INFO  | Setting property internal_version: 0.6.2 2025-05-19 15:03:22.097104 | orchestrator | 2025-05-19 15:03:22 | INFO  | Setting property image_original_user: cirros 2025-05-19 15:03:22.335076 | orchestrator | 2025-05-19 15:03:22 | INFO  | Setting property os_version: 0.6.2 2025-05-19 15:03:22.586567 | orchestrator | 2025-05-19 15:03:22 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-05-19 15:03:22.799576 | orchestrator | 2025-05-19 15:03:22 | INFO  | Setting property image_build_date: 2023-05-30 2025-05-19 15:03:23.034234 | orchestrator | 2025-05-19 15:03:23 | INFO  | Checking status of 'Cirros 0.6.2' 2025-05-19 15:03:23.034770 | orchestrator | 2025-05-19 15:03:23 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-05-19 15:03:23.035723 | orchestrator | 2025-05-19 15:03:23 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-05-19 15:03:23.255643 | orchestrator | 2025-05-19 15:03:23 | INFO  | Processing image 'Cirros 0.6.3' 2025-05-19 15:03:23.454544 | orchestrator | 2025-05-19 15:03:23 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-05-19 15:03:23.454756 | orchestrator | 2025-05-19 15:03:23 | INFO  | Importing image Cirros 0.6.3 2025-05-19 15:03:23.455690 | orchestrator | 2025-05-19 15:03:23 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-05-19 15:03:24.551306 | orchestrator | 2025-05-19 
15:03:24 | INFO  | Waiting for image to leave queued state... 2025-05-19 15:03:26.595619 | orchestrator | 2025-05-19 15:03:26 | INFO  | Waiting for import to complete... 2025-05-19 15:03:36.908485 | orchestrator | 2025-05-19 15:03:36 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-05-19 15:03:37.198681 | orchestrator | 2025-05-19 15:03:37 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-05-19 15:03:37.198763 | orchestrator | 2025-05-19 15:03:37 | INFO  | Setting internal_version = 0.6.3 2025-05-19 15:03:37.199829 | orchestrator | 2025-05-19 15:03:37 | INFO  | Setting image_original_user = cirros 2025-05-19 15:03:37.201193 | orchestrator | 2025-05-19 15:03:37 | INFO  | Adding tag os:cirros 2025-05-19 15:03:37.464243 | orchestrator | 2025-05-19 15:03:37 | INFO  | Setting property architecture: x86_64 2025-05-19 15:03:37.677806 | orchestrator | 2025-05-19 15:03:37 | INFO  | Setting property hw_disk_bus: scsi 2025-05-19 15:03:37.915122 | orchestrator | 2025-05-19 15:03:37 | INFO  | Setting property hw_rng_model: virtio 2025-05-19 15:03:38.139476 | orchestrator | 2025-05-19 15:03:38 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-19 15:03:38.349474 | orchestrator | 2025-05-19 15:03:38 | INFO  | Setting property hw_watchdog_action: reset 2025-05-19 15:03:38.526242 | orchestrator | 2025-05-19 15:03:38 | INFO  | Setting property hypervisor_type: qemu 2025-05-19 15:03:38.722547 | orchestrator | 2025-05-19 15:03:38 | INFO  | Setting property os_distro: cirros 2025-05-19 15:03:38.936217 | orchestrator | 2025-05-19 15:03:38 | INFO  | Setting property replace_frequency: never 2025-05-19 15:03:39.120989 | orchestrator | 2025-05-19 15:03:39 | INFO  | Setting property uuid_validity: none 2025-05-19 15:03:39.330267 | orchestrator | 2025-05-19 15:03:39 | INFO  | Setting property provided_until: none 2025-05-19 15:03:39.538900 | orchestrator | 2025-05-19 15:03:39 | INFO  | Setting property image_description: Cirros 2025-05-19 15:03:39.752301 | orchestrator | 2025-05-19 15:03:39 | INFO  | Setting property image_name: Cirros 2025-05-19 15:03:39.974318 | orchestrator | 2025-05-19 15:03:39 | INFO  | Setting property internal_version: 0.6.3 2025-05-19 15:03:40.178952 | orchestrator | 2025-05-19 15:03:40 | INFO  | Setting property image_original_user: cirros 2025-05-19 15:03:40.377764 | orchestrator | 2025-05-19 15:03:40 | INFO  | Setting property os_version: 0.6.3 2025-05-19 15:03:40.587967 | orchestrator | 2025-05-19 15:03:40 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-05-19 15:03:40.779610 | orchestrator | 2025-05-19 15:03:40 | INFO  | Setting property image_build_date: 2024-09-26 2025-05-19 15:03:40.998923 | orchestrator | 2025-05-19 15:03:40 | INFO  | Checking status of 'Cirros 0.6.3' 2025-05-19 15:03:41.000242 | orchestrator | 2025-05-19 15:03:40 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-05-19 15:03:41.001434 | orchestrator | 2025-05-19 15:03:40 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-05-19 15:03:41.981161 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-05-19 15:03:43.858198 | orchestrator | 2025-05-19 15:03:43 | INFO  | date: 2025-05-19 2025-05-19 15:03:43.858504 | orchestrator | 2025-05-19 15:03:43 | INFO  | image: octavia-amphora-haproxy-2024.2.20250519.qcow2 2025-05-19 15:03:43.858525 | orchestrator | 2025-05-19 15:03:43 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250519.qcow2 2025-05-19 15:03:43.858549 | orchestrator | 2025-05-19 15:03:43 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250519.qcow2.CHECKSUM 2025-05-19 15:03:43.920280 | orchestrator | 2025-05-19 15:03:43 | INFO  | checksum: 182419243ca6dc3f15969fa524833c630d9964bbf1d84efd76eee941e0be38b4 2025-05-19 15:03:44.001596 | orchestrator | 2025-05-19 15:03:44 | INFO  | It takes a moment until task b94865fe-0c95-4413-84cc-f3dd14bb9528 (image-manager) has been started and output is visible here. 2025-05-19 15:03:46.574859 | orchestrator | 2025-05-19 15:03:46 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-05-19' 2025-05-19 15:03:46.593150 | orchestrator | 2025-05-19 15:03:46 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250519.qcow2: 200 2025-05-19 15:03:46.594970 | orchestrator | 2025-05-19 15:03:46 | INFO  | Importing image OpenStack Octavia Amphora 2025-05-19 2025-05-19 15:03:46.595002 | orchestrator | 2025-05-19 15:03:46 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250519.qcow2 2025-05-19 15:03:47.705436 | orchestrator | 2025-05-19 15:03:47 | INFO  | Waiting for image to leave queued state... 2025-05-19 15:03:49.748948 | orchestrator | 2025-05-19 15:03:49 | INFO  | Waiting for import to complete... 2025-05-19 15:03:59.851945 | orchestrator | 2025-05-19 15:03:59 | INFO  | Waiting for import to complete... 2025-05-19 15:04:09.939496 | orchestrator | 2025-05-19 15:04:09 | INFO  | Waiting for import to complete... 2025-05-19 15:04:20.040099 | orchestrator | 2025-05-19 15:04:20 | INFO  | Waiting for import to complete... 2025-05-19 15:04:30.131594 | orchestrator | 2025-05-19 15:04:30 | INFO  | Waiting for import to complete... 
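Note: before importing the amphora image, the 301 script resolved the image URL and its .CHECKSUM companion and logged the expected sha256. The same verification can be repeated by hand with the values printed above; a minimal sketch:

# Download the image and compare against the checksum from the log.
BASE=https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image
IMG=octavia-amphora-haproxy-2024.2.20250519.qcow2
curl -sLO "$BASE/$IMG"
echo "182419243ca6dc3f15969fa524833c630d9964bbf1d84efd76eee941e0be38b4  $IMG" | sha256sum -c -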
2025-05-19 15:04:40.250345 | orchestrator | 2025-05-19 15:04:40 | INFO  | Import of 'OpenStack Octavia Amphora 2025-05-19' successfully completed, reloading images 2025-05-19 15:04:40.560786 | orchestrator | 2025-05-19 15:04:40 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-05-19' 2025-05-19 15:04:40.561745 | orchestrator | 2025-05-19 15:04:40 | INFO  | Setting internal_version = 2025-05-19 2025-05-19 15:04:40.562571 | orchestrator | 2025-05-19 15:04:40 | INFO  | Setting image_original_user = ubuntu 2025-05-19 15:04:40.563412 | orchestrator | 2025-05-19 15:04:40 | INFO  | Adding tag amphora 2025-05-19 15:04:40.767310 | orchestrator | 2025-05-19 15:04:40 | INFO  | Adding tag os:ubuntu 2025-05-19 15:04:40.942362 | orchestrator | 2025-05-19 15:04:40 | INFO  | Setting property architecture: x86_64 2025-05-19 15:04:41.243623 | orchestrator | 2025-05-19 15:04:41 | INFO  | Setting property hw_disk_bus: scsi 2025-05-19 15:04:41.459740 | orchestrator | 2025-05-19 15:04:41 | INFO  | Setting property hw_rng_model: virtio 2025-05-19 15:04:41.641885 | orchestrator | 2025-05-19 15:04:41 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-19 15:04:41.832310 | orchestrator | 2025-05-19 15:04:41 | INFO  | Setting property hw_watchdog_action: reset 2025-05-19 15:04:42.218184 | orchestrator | 2025-05-19 15:04:42 | INFO  | Setting property hypervisor_type: qemu 2025-05-19 15:04:42.438550 | orchestrator | 2025-05-19 15:04:42 | INFO  | Setting property os_distro: ubuntu 2025-05-19 15:04:42.600697 | orchestrator | 2025-05-19 15:04:42 | INFO  | Setting property replace_frequency: quarterly 2025-05-19 15:04:42.799274 | orchestrator | 2025-05-19 15:04:42 | INFO  | Setting property uuid_validity: last-1 2025-05-19 15:04:43.018605 | orchestrator | 2025-05-19 15:04:43 | INFO  | Setting property provided_until: none 2025-05-19 15:04:43.199055 | orchestrator | 2025-05-19 15:04:43 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-05-19 15:04:43.405255 | orchestrator | 2025-05-19 15:04:43 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-05-19 15:04:43.618902 | orchestrator | 2025-05-19 15:04:43 | INFO  | Setting property internal_version: 2025-05-19 2025-05-19 15:04:43.799460 | orchestrator | 2025-05-19 15:04:43 | INFO  | Setting property image_original_user: ubuntu 2025-05-19 15:04:44.048204 | orchestrator | 2025-05-19 15:04:44 | INFO  | Setting property os_version: 2025-05-19 2025-05-19 15:04:44.263175 | orchestrator | 2025-05-19 15:04:44 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250519.qcow2 2025-05-19 15:04:44.494562 | orchestrator | 2025-05-19 15:04:44 | INFO  | Setting property image_build_date: 2025-05-19 2025-05-19 15:04:44.709500 | orchestrator | 2025-05-19 15:04:44 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-05-19' 2025-05-19 15:04:44.709867 | orchestrator | 2025-05-19 15:04:44 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-05-19' 2025-05-19 15:04:44.906456 | orchestrator | 2025-05-19 15:04:44 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-05-19 15:04:44.907946 | orchestrator | 2025-05-19 15:04:44 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-05-19 15:04:44.908602 | orchestrator | 2025-05-19 15:04:44 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-05-19 15:04:44.909898 | 
orchestrator | 2025-05-19 15:04:44 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-05-19 15:04:45.602403 | orchestrator | ok: Runtime: 0:02:56.368419 2025-05-19 15:04:45.629993 | 2025-05-19 15:04:45.630234 | TASK [Run checks] 2025-05-19 15:04:46.355495 | orchestrator | + set -e 2025-05-19 15:04:46.355742 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 15:04:46.355773 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 15:04:46.355795 | orchestrator | ++ INTERACTIVE=false 2025-05-19 15:04:46.355809 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 15:04:46.355822 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 15:04:46.355837 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-05-19 15:04:46.356417 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-05-19 15:04:46.360270 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 15:04:46.360300 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-19 15:04:46.360312 | orchestrator | + echo 2025-05-19 15:04:46.360332 | orchestrator | 2025-05-19 15:04:46.360344 | orchestrator | # CHECK 2025-05-19 15:04:46.360355 | orchestrator | 2025-05-19 15:04:46.360380 | orchestrator | + echo '# CHECK' 2025-05-19 15:04:46.360391 | orchestrator | + echo 2025-05-19 15:04:46.360405 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-19 15:04:46.361189 | orchestrator | ++ semver latest 5.0.0 2025-05-19 15:04:46.418375 | orchestrator | 2025-05-19 15:04:46.418481 | orchestrator | ## Containers @ testbed-manager 2025-05-19 15:04:46.418496 | orchestrator | 2025-05-19 15:04:46.418511 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-19 15:04:46.418522 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-19 15:04:46.418533 | orchestrator | + echo 2025-05-19 15:04:46.418545 | orchestrator | + echo '## Containers @ testbed-manager' 2025-05-19 15:04:46.418557 | orchestrator | + echo 2025-05-19 15:04:46.418567 | orchestrator | + osism container testbed-manager ps 2025-05-19 15:04:48.499606 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-19 15:04:48.499746 | orchestrator | 5c6f8139257f registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter 2025-05-19 15:04:48.499771 | orchestrator | c3c42d711f39 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2025-05-19 15:04:48.499784 | orchestrator | 3b8e7e0556c6 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-05-19 15:04:48.499795 | orchestrator | 6633fb272344 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-05-19 15:04:48.499806 | orchestrator | 8d757db8ea27 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2025-05-19 15:04:48.499823 | orchestrator | eb84383b03db registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient 2025-05-19 15:04:48.499836 | orchestrator | f40c4f69a500 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-05-19 15:04:48.499848 | orchestrator | 
ccea2b8a6908 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-05-19 15:04:48.499859 | orchestrator | f199566768b4 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-05-19 15:04:48.499898 | orchestrator | abb68c92ebb0 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin 2025-05-19 15:04:48.499910 | orchestrator | 2a5c0d4771c8 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 29 minutes openstackclient 2025-05-19 15:04:48.499937 | orchestrator | 501641ce5d20 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 30 minutes ago Up 29 minutes (healthy) 8080/tcp homer 2025-05-19 15:04:48.499948 | orchestrator | 44fe3fc096ef registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 49 minutes ago Up 48 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-05-19 15:04:48.499959 | orchestrator | 9c58111ed04e registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 53 minutes ago Up 52 minutes (healthy) manager-inventory_reconciler-1 2025-05-19 15:04:48.499970 | orchestrator | eb134347a1c2 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 53 minutes ago Up 52 minutes (healthy) osism-kubernetes 2025-05-19 15:04:48.499981 | orchestrator | a05d30e70971 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 53 minutes ago Up 52 minutes (healthy) osism-ansible 2025-05-19 15:04:48.499998 | orchestrator | cd626a9e6eef registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 53 minutes ago Up 52 minutes (healthy) kolla-ansible 2025-05-19 15:04:48.500938 | orchestrator | 3960f4dbc970 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 53 minutes ago Up 52 minutes (healthy) ceph-ansible 2025-05-19 15:04:48.500968 | orchestrator | edfba09966fc registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 53 minutes ago Up 53 minutes (healthy) 8000/tcp manager-ara-server-1 2025-05-19 15:04:48.500982 | orchestrator | 93481b6b71d8 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 53 minutes ago Up 53 minutes (healthy) manager-conductor-1 2025-05-19 15:04:48.500995 | orchestrator | c00def26e5d9 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 53 minutes ago Up 53 minutes (healthy) 3306/tcp manager-mariadb-1 2025-05-19 15:04:48.501008 | orchestrator | 2f65d0222d7b registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 53 minutes ago Up 53 minutes (healthy) manager-listener-1 2025-05-19 15:04:48.501021 | orchestrator | da1316a81996 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" 53 minutes ago Up 53 minutes (healthy) 6379/tcp manager-redis-1 2025-05-19 15:04:48.501048 | orchestrator | af57733a3804 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 53 minutes ago Up 53 minutes (healthy) manager-openstack-1 2025-05-19 15:04:48.501061 | orchestrator | 66dbc6a86fa7 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 53 minutes ago Up 53 minutes (healthy) manager-flower-1 2025-05-19 15:04:48.501074 | orchestrator | ab1fd1c0076a registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 53 minutes ago Up 53 minutes (healthy) osismclient 2025-05-19 15:04:48.501088 | orchestrator | 3009b520066c registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 53 minutes ago Up 53 minutes 
(healthy) manager-watchdog-1 2025-05-19 15:04:48.501122 | orchestrator | 97b50bbf00d0 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 53 minutes ago Up 53 minutes (healthy) manager-beat-1 2025-05-19 15:04:48.501136 | orchestrator | 457a4110deb2 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 53 minutes ago Up 53 minutes (healthy) manager-netbox-1 2025-05-19 15:04:48.501149 | orchestrator | eedb133065df registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 53 minutes ago Restarting (0) 25 seconds ago manager-api-1 2025-05-19 15:04:48.501169 | orchestrator | 3ec7856e12e6 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" 59 minutes ago Up 54 minutes (healthy) netbox-netbox-worker-1 2025-05-19 15:04:48.501196 | orchestrator | 0fead815ba9b registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" 59 minutes ago Up 59 minutes (healthy) netbox-netbox-1 2025-05-19 15:04:48.501221 | orchestrator | a123e418ab3d registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" 59 minutes ago Up 59 minutes (healthy) 5432/tcp netbox-postgres-1 2025-05-19 15:04:48.501246 | orchestrator | 606e0e44c96c registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" 59 minutes ago Up 59 minutes (healthy) 6379/tcp netbox-redis-1 2025-05-19 15:04:48.501258 | orchestrator | de70f4a15305 registry.osism.tech/dockerhub/library/traefik:v3.4.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-05-19 15:04:48.727207 | orchestrator | 2025-05-19 15:04:48.727319 | orchestrator | ## Images @ testbed-manager 2025-05-19 15:04:48.727336 | orchestrator | 2025-05-19 15:04:48.727348 | orchestrator | + echo 2025-05-19 15:04:48.727360 | orchestrator | + echo '## Images @ testbed-manager' 2025-05-19 15:04:48.727372 | orchestrator | + echo 2025-05-19 15:04:48.727385 | orchestrator | + osism container testbed-manager images 2025-05-19 15:04:50.866301 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-19 15:04:50.866448 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest e55847d83aa4 2 hours ago 306MB 2025-05-19 15:04:50.866491 | orchestrator | registry.osism.tech/osism/osism-ansible latest 514d2985b124 3 hours ago 556MB 2025-05-19 15:04:50.866505 | orchestrator | registry.osism.tech/osism/homer v25.05.2 df83d86990c5 12 hours ago 11MB 2025-05-19 15:04:50.866515 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 ffbdd10a1d31 12 hours ago 225MB 2025-05-19 15:04:50.866526 | orchestrator | registry.osism.tech/osism/cephclient reef 274f9656897d 12 hours ago 453MB 2025-05-19 15:04:50.866537 | orchestrator | registry.osism.tech/kolla/cron 2024.2 d1f2ebfdaafa 13 hours ago 325MB 2025-05-19 15:04:50.866547 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ff61877c6f9a 13 hours ago 635MB 2025-05-19 15:04:50.866558 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 85cbe560a6a5 13 hours ago 753MB 2025-05-19 15:04:50.866568 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 0095342773fe 13 hours ago 898MB 2025-05-19 15:04:50.866579 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 db7b45ced52e 13 hours ago 417MB 2025-05-19 15:04:50.866591 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 18f9e2129e03 13 hours ago 463MB 2025-05-19 15:04:50.866602 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 
d8f8ac9b12c6 13 hours ago 365MB 2025-05-19 15:04:50.866612 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 b4a68173fb3b 13 hours ago 367MB 2025-05-19 15:04:50.866623 | orchestrator | registry.osism.tech/osism/osism latest c1760045c1e2 15 hours ago 339MB 2025-05-19 15:04:50.866634 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 293cd8fb4739 15 hours ago 573MB 2025-05-19 15:04:50.866644 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 3a406958f36e 15 hours ago 1.2GB 2025-05-19 15:04:50.866655 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 4c0257b04a85 15 hours ago 537MB 2025-05-19 15:04:50.866666 | orchestrator | registry.osism.tech/dockerhub/library/postgres 16.9-alpine b56133b65cd3 10 days ago 275MB 2025-05-19 15:04:50.866676 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.0 79e66182ffbe 2 weeks ago 224MB 2025-05-19 15:04:50.866687 | orchestrator | registry.osism.tech/dockerhub/hashicorp/vault 1.19.3 272792d172e0 2 weeks ago 504MB 2025-05-19 15:04:50.866698 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.3-alpine 9a07b03a1871 3 weeks ago 41.4MB 2025-05-19 15:04:50.866709 | orchestrator | registry.osism.tech/osism/netbox v4.2.2 de0f89b61971 6 weeks ago 817MB 2025-05-19 15:04:50.866719 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB 2025-05-19 15:04:50.866742 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 3 months ago 571MB 2025-05-19 15:04:50.866753 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 8 months ago 300MB 2025-05-19 15:04:50.866764 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB 2025-05-19 15:04:51.183466 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-19 15:04:51.183767 | orchestrator | ++ semver latest 5.0.0 2025-05-19 15:04:51.234509 | orchestrator | 2025-05-19 15:04:51.234593 | orchestrator | ## Containers @ testbed-node-0 2025-05-19 15:04:51.234607 | orchestrator | 2025-05-19 15:04:51.234619 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-19 15:04:51.234630 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-19 15:04:51.234641 | orchestrator | + echo 2025-05-19 15:04:51.234653 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-05-19 15:04:51.234665 | orchestrator | + echo 2025-05-19 15:04:51.234676 | orchestrator | + osism container testbed-node-0 ps 2025-05-19 15:04:53.270661 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-19 15:04:53.270776 | orchestrator | 100cfeee2ff3 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-19 15:04:53.270801 | orchestrator | 0f789ef20413 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-19 15:04:53.270821 | orchestrator | e5bbcdb7aa47 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-19 15:04:53.270841 | orchestrator | fb96f7e38c74 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-05-19 15:04:53.270861 | orchestrator | 6c66b1dcb320 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_api 
2025-05-19 15:04:53.270884 | orchestrator | d30eb91ad2c8 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-05-19 15:04:53.270902 | orchestrator | 8964f356b9b8 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-05-19 15:04:53.270922 | orchestrator | 955b56cb708a registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 7 minutes grafana 2025-05-19 15:04:53.270941 | orchestrator | 63804039fc75 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-19 15:04:53.270959 | orchestrator | d47602ab6d56 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-05-19 15:04:53.270978 | orchestrator | 58a7b4315d16 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-05-19 15:04:53.270998 | orchestrator | eb04b3c15e1e registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-05-19 15:04:53.271018 | orchestrator | 145110a9e303 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-05-19 15:04:53.271036 | orchestrator | 3129ac2a71a2 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-05-19 15:04:53.271056 | orchestrator | 184ae4b4d1fe registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-05-19 15:04:53.271076 | orchestrator | 61517dbe4837 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 8 minutes (healthy) nova_conductor 2025-05-19 15:04:53.271170 | orchestrator | f0ec92dd0f57 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-05-19 15:04:53.271215 | orchestrator | 8cdd8bc0747e registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-05-19 15:04:53.271241 | orchestrator | b9e6c1718fe7 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) barbican_worker 2025-05-19 15:04:53.271292 | orchestrator | c712a8073727 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-19 15:04:53.271313 | orchestrator | cbb6acfc6c37 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-05-19 15:04:53.271330 | orchestrator | 1093182a3e9a registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-05-19 15:04:53.271348 | orchestrator | ce8af754cac3 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-05-19 15:04:53.271367 | orchestrator | 8bef2d29fb2e registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-05-19 15:04:53.271387 | orchestrator | 27a89dc5a166 
registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-05-19 15:04:53.271406 | orchestrator | 73d189214856 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-05-19 15:04:53.271424 | orchestrator | 436e0c3a7154 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-05-19 15:04:53.271443 | orchestrator | 351b0fcc377a registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-05-19 15:04:53.271463 | orchestrator | 67053fd58fa9 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-05-19 15:04:53.271481 | orchestrator | 9b860998155e registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-05-19 15:04:53.271499 | orchestrator | bb6bec4b9fd7 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-05-19 15:04:53.271511 | orchestrator | 1bd946be20a7 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2025-05-19 15:04:53.271521 | orchestrator | 21ddad3ea001 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-05-19 15:04:53.271532 | orchestrator | 47dc2c6cf582 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-05-19 15:04:53.271543 | orchestrator | 863d5bd94bf0 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-05-19 15:04:53.271554 | orchestrator | a1ad908f777d registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-05-19 15:04:53.271565 | orchestrator | c5145e113833 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-05-19 15:04:53.271576 | orchestrator | eeac2455ae1a registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-05-19 15:04:53.271597 | orchestrator | 19e5d0c8a968 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-05-19 15:04:53.271608 | orchestrator | 2fc9f3980fde registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0 2025-05-19 15:04:53.271642 | orchestrator | bf0b6b3b0236 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-05-19 15:04:53.271654 | orchestrator | 78d9b09cd2d7 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-05-19 15:04:53.271671 | orchestrator | 4887bd0048f6 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-05-19 15:04:53.271682 | orchestrator | abf1d3c3a2c8 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-05-19 15:04:53.271693 | orchestrator 
| 34cce568d75f registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-05-19 15:04:53.271703 | orchestrator | c339d84240ad registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-05-19 15:04:53.271714 | orchestrator | a22936fb699a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0 2025-05-19 15:04:53.271728 | orchestrator | 12d7301fea7d registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-05-19 15:04:53.271746 | orchestrator | e0ab3a844dce registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-05-19 15:04:53.271763 | orchestrator | b2bf3a63d2e7 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-05-19 15:04:53.271791 | orchestrator | 393656fe3f43 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-05-19 15:04:53.271810 | orchestrator | c949d1043286 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-05-19 15:04:53.271827 | orchestrator | 5329a23f110b registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-05-19 15:04:53.271845 | orchestrator | dd344e9cd519 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2025-05-19 15:04:53.271864 | orchestrator | 041f79438066 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-05-19 15:04:53.271882 | orchestrator | 59c7b5a1c7cf registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-05-19 15:04:53.271900 | orchestrator | 364160d19729 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-05-19 15:04:53.494535 | orchestrator | 2025-05-19 15:04:53.494638 | orchestrator | ## Images @ testbed-node-0 2025-05-19 15:04:53.494652 | orchestrator | 2025-05-19 15:04:53.494664 | orchestrator | + echo 2025-05-19 15:04:53.494677 | orchestrator | + echo '## Images @ testbed-node-0' 2025-05-19 15:04:53.494721 | orchestrator | + echo 2025-05-19 15:04:53.494734 | orchestrator | + osism container testbed-node-0 images 2025-05-19 15:04:55.508062 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-19 15:04:55.508260 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 337e4de32d05 12 hours ago 1.27GB 2025-05-19 15:04:55.508278 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e210496370d6 13 hours ago 325MB 2025-05-19 15:04:55.508289 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 98f8932715ed 13 hours ago 1.02GB 2025-05-19 15:04:55.508299 | orchestrator | registry.osism.tech/kolla/cron 2024.2 d1f2ebfdaafa 13 hours ago 325MB 2025-05-19 15:04:55.508308 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 fb398e112a94 13 hours ago 425MB 2025-05-19 15:04:55.508318 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ff61877c6f9a 13 hours ago 635MB 2025-05-19 15:04:55.508327 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 5a4e2ecd6cd8 13 hours ago 333MB 2025-05-19 15:04:55.508336 | 
orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 85cbe560a6a5 13 hours ago 753MB 2025-05-19 15:04:55.508346 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 764003e5a453 13 hours ago 336MB 2025-05-19 15:04:55.508355 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 aebda5188773 13 hours ago 382MB 2025-05-19 15:04:55.508365 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 3c81b3a03e26 13 hours ago 1.56GB 2025-05-19 15:04:55.508374 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 346c4c942746 13 hours ago 1.6GB 2025-05-19 15:04:55.508383 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 51c53010013a 13 hours ago 1.22GB 2025-05-19 15:04:55.508393 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 ac71644620ce 13 hours ago 331MB 2025-05-19 15:04:55.508407 | orchestrator | registry.osism.tech/kolla/redis 2024.2 5f35356f01de 13 hours ago 331MB 2025-05-19 15:04:55.508421 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 08fa394d0c53 13 hours ago 597MB 2025-05-19 15:04:55.508430 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 56d7d2734b69 13 hours ago 358MB 2025-05-19 15:04:55.508440 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 db7b45ced52e 13 hours ago 417MB 2025-05-19 15:04:55.508449 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 b47ed4892777 13 hours ago 351MB 2025-05-19 15:04:55.508458 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 34a9317295e9 13 hours ago 360MB 2025-05-19 15:04:55.508468 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d8f8ac9b12c6 13 hours ago 365MB 2025-05-19 15:04:55.508477 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 409dc2f5ba10 13 hours ago 368MB 2025-05-19 15:04:55.508486 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 6be9e6161bec 13 hours ago 368MB 2025-05-19 15:04:55.508496 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3aa83c27e689 13 hours ago 1.25GB 2025-05-19 15:04:55.508505 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 43932f4e274f 13 hours ago 1.14GB 2025-05-19 15:04:55.508514 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 58b8e4608c23 13 hours ago 1.11GB 2025-05-19 15:04:55.508524 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 41ccd10b0a05 13 hours ago 1.12GB 2025-05-19 15:04:55.508540 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0a14cad7002e 13 hours ago 1.31GB 2025-05-19 15:04:55.508570 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 01f6e86e50b7 13 hours ago 1.2GB 2025-05-19 15:04:55.508580 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6c6c9e788527 13 hours ago 1.05GB 2025-05-19 15:04:55.508589 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 e5aa89b61516 13 hours ago 1.16GB 2025-05-19 15:04:55.508599 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 ec9dca657403 13 hours ago 1.43GB 2025-05-19 15:04:55.508608 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 439bee5c20ef 13 hours ago 1.3GB 2025-05-19 15:04:55.508617 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 68578d55bfec 13 hours ago 1.3GB 2025-05-19 15:04:55.508627 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 b79121c50e35 13 hours ago 1.3GB 2025-05-19 15:04:55.508636 | orchestrator | 
registry.osism.tech/kolla/ceilometer-central 2024.2 af49ea0edd22 13 hours ago 1.05GB 2025-05-19 15:04:55.508663 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 bbfe1a7ce4ee 13 hours ago 1.05GB 2025-05-19 15:04:55.508674 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 dc3b6ea24901 13 hours ago 1.06GB 2025-05-19 15:04:55.508683 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 71c1782aa49f 13 hours ago 1.06GB 2025-05-19 15:04:55.508697 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 50ba6b6676a2 13 hours ago 1.06GB 2025-05-19 15:04:55.508713 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 3e093a9b579c 13 hours ago 1.06GB 2025-05-19 15:04:55.508728 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 5352fd2b8cef 13 hours ago 1.06GB 2025-05-19 15:04:55.508738 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 69c5fb32c728 13 hours ago 1.06GB 2025-05-19 15:04:55.508747 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 1cc196c1f755 13 hours ago 1.41GB 2025-05-19 15:04:55.508757 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 f2ba11db404f 13 hours ago 1.41GB 2025-05-19 15:04:55.508766 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 da9d8a7c6e9c 13 hours ago 1.05GB 2025-05-19 15:04:55.508776 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 01f8d389f8cb 13 hours ago 1.05GB 2025-05-19 15:04:55.508785 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 291e9335d6ec 13 hours ago 1.05GB 2025-05-19 15:04:55.508794 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 66fe060a7ad0 13 hours ago 1.05GB 2025-05-19 15:04:55.508804 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 b4072c0c818e 13 hours ago 1.13GB 2025-05-19 15:04:55.508813 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 074824aa0e40 13 hours ago 1.1GB 2025-05-19 15:04:55.508822 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 faae1b094f64 13 hours ago 1.1GB 2025-05-19 15:04:55.508832 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 29c8ff23103b 13 hours ago 1.1GB 2025-05-19 15:04:55.508847 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 f7c07501a9b2 13 hours ago 1.13GB 2025-05-19 15:04:55.508864 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 a0c2f314ca0b 13 hours ago 1.11GB 2025-05-19 15:04:55.508881 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 b10e6b6ed9bf 13 hours ago 1.12GB 2025-05-19 15:04:55.508897 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 bd68af59ca7c 13 hours ago 1.06GB 2025-05-19 15:04:55.508913 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 96b6bf2076dc 13 hours ago 1.07GB 2025-05-19 15:04:55.508940 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2bf9cc1505b7 13 hours ago 1.07GB 2025-05-19 15:04:55.508964 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 d8b4138c1ae7 13 hours ago 953MB 2025-05-19 15:04:55.508980 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 2eaccb490396 13 hours ago 953MB 2025-05-19 15:04:55.508997 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 57e22b5fe5f4 13 hours ago 954MB 2025-05-19 15:04:55.509007 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a250a9fddb69 13 hours ago 954MB 2025-05-19 15:04:55.733251 | orchestrator | + for 
node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-19 15:04:55.733888 | orchestrator | ++ semver latest 5.0.0 2025-05-19 15:04:55.790222 | orchestrator | 2025-05-19 15:04:55.790316 | orchestrator | ## Containers @ testbed-node-1 2025-05-19 15:04:55.790333 | orchestrator | 2025-05-19 15:04:55.790345 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-19 15:04:55.790356 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-19 15:04:55.790367 | orchestrator | + echo 2025-05-19 15:04:55.790379 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-05-19 15:04:55.790391 | orchestrator | + echo 2025-05-19 15:04:55.790757 | orchestrator | + osism container testbed-node-1 ps 2025-05-19 15:04:57.883480 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-19 15:04:57.883591 | orchestrator | cdcb952ab25f registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-19 15:04:57.883608 | orchestrator | 57b3fa6d3d49 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-19 15:04:57.883620 | orchestrator | 82babe3cb812 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-19 15:04:57.883631 | orchestrator | b1fa31d91f0c registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-05-19 15:04:57.883642 | orchestrator | 0ede19fc5e9d registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-05-19 15:04:57.883660 | orchestrator | 3c948779a885 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2025-05-19 15:04:57.883671 | orchestrator | 9d0077ffca41 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-05-19 15:04:57.883682 | orchestrator | 915e853bcd53 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-05-19 15:04:57.883693 | orchestrator | 7edc143c2b33 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-19 15:04:57.883704 | orchestrator | ad3ba1059db5 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-05-19 15:04:57.883715 | orchestrator | e7d7da105deb registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-05-19 15:04:57.883726 | orchestrator | dcd37b1bd21c registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-05-19 15:04:57.883737 | orchestrator | 2fd196e37f46 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-05-19 15:04:57.883769 | orchestrator | 97eac5b13eee registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-05-19 15:04:57.883780 | orchestrator | 423aaaba8128 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-05-19 15:04:57.883791 | 
orchestrator | 56b81f8d1d90 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-05-19 15:04:57.883812 | orchestrator | 48c6c9bb85c8 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 8 minutes (healthy) nova_conductor 2025-05-19 15:04:57.883823 | orchestrator | 9ac7439ae729 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-05-19 15:04:57.883834 | orchestrator | 5d3e43fbe603 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-05-19 15:04:57.883844 | orchestrator | 32ad68862730 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-19 15:04:57.883855 | orchestrator | cf4dda8b1ea8 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-05-19 15:04:57.883883 | orchestrator | abcf51b7cc7b registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-05-19 15:04:57.883894 | orchestrator | 33d73e925b0a registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-19 15:04:57.883905 | orchestrator | 80f05f614f1d registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-05-19 15:04:57.883917 | orchestrator | 4c8d51ebacbd registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-05-19 15:04:57.883927 | orchestrator | caf58d96145b registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-05-19 15:04:57.883938 | orchestrator | 7cb570e06a88 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-05-19 15:04:57.883949 | orchestrator | 39dfd30a5354 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-05-19 15:04:57.883961 | orchestrator | 992efcf902f8 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-05-19 15:04:57.883979 | orchestrator | eb1b70de05ac registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-05-19 15:04:57.883999 | orchestrator | 9826f9ecfb92 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-05-19 15:04:57.884017 | orchestrator | 43a7ea1264ad registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2025-05-19 15:04:57.884046 | orchestrator | 374d5817daa1 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-05-19 15:04:57.884066 | orchestrator | b6def3a251cc registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-05-19 15:04:57.884085 | orchestrator | 7a2f67793634 
registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-05-19 15:04:57.884103 | orchestrator | 81b45498e00e registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-05-19 15:04:57.884154 | orchestrator | 05b02b12bad3 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-05-19 15:04:57.884174 | orchestrator | 67ad453e4230 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-05-19 15:04:57.884194 | orchestrator | cc75c5d828ae registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2025-05-19 15:04:57.884214 | orchestrator | 59ca97b40cd3 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1 2025-05-19 15:04:57.884241 | orchestrator | 25719bfd95d5 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-05-19 15:04:57.884261 | orchestrator | 70535c78447d registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-05-19 15:04:57.884281 | orchestrator | cf9dc23b3388 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-05-19 15:04:57.884299 | orchestrator | 1dc292b10df6 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2025-05-19 15:04:57.884330 | orchestrator | 6dc64627d6e0 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db 2025-05-19 15:04:57.884342 | orchestrator | 3cfd3d11acd5 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db 2025-05-19 15:04:57.884352 | orchestrator | 3dd627aa5fdc registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2025-05-19 15:04:57.884363 | orchestrator | e6b0a054ee84 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-05-19 15:04:57.884374 | orchestrator | 64cc773a9c53 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1 2025-05-19 15:04:57.884385 | orchestrator | ac5ef4ab1500 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-05-19 15:04:57.884395 | orchestrator | f68078bd0b35 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-05-19 15:04:57.884406 | orchestrator | 9fada457ddb2 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-05-19 15:04:57.884426 | orchestrator | db894af2af09 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-05-19 15:04:57.884437 | orchestrator | 86a7d17835cd registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2025-05-19 15:04:57.884448 | orchestrator | aadde889f5c5 registry.osism.tech/kolla/cron:2024.2 
"dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-05-19 15:04:57.884458 | orchestrator | e6850c3d3e49 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-05-19 15:04:57.884469 | orchestrator | 0b9dce266cf7 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-05-19 15:04:58.121846 | orchestrator | 2025-05-19 15:04:58.121944 | orchestrator | ## Images @ testbed-node-1 2025-05-19 15:04:58.121959 | orchestrator | 2025-05-19 15:04:58.121971 | orchestrator | + echo 2025-05-19 15:04:58.121982 | orchestrator | + echo '## Images @ testbed-node-1' 2025-05-19 15:04:58.121995 | orchestrator | + echo 2025-05-19 15:04:58.122006 | orchestrator | + osism container testbed-node-1 images 2025-05-19 15:05:00.178848 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-19 15:05:00.178942 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 337e4de32d05 12 hours ago 1.27GB 2025-05-19 15:05:00.178957 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e210496370d6 13 hours ago 325MB 2025-05-19 15:05:00.178968 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 98f8932715ed 13 hours ago 1.02GB 2025-05-19 15:05:00.178979 | orchestrator | registry.osism.tech/kolla/cron 2024.2 d1f2ebfdaafa 13 hours ago 325MB 2025-05-19 15:05:00.178989 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 fb398e112a94 13 hours ago 425MB 2025-05-19 15:05:00.179000 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ff61877c6f9a 13 hours ago 635MB 2025-05-19 15:05:00.179010 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 5a4e2ecd6cd8 13 hours ago 333MB 2025-05-19 15:05:00.179021 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 85cbe560a6a5 13 hours ago 753MB 2025-05-19 15:05:00.179031 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 764003e5a453 13 hours ago 336MB 2025-05-19 15:05:00.179042 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 aebda5188773 13 hours ago 382MB 2025-05-19 15:05:00.179052 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 3c81b3a03e26 13 hours ago 1.56GB 2025-05-19 15:05:00.179063 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 346c4c942746 13 hours ago 1.6GB 2025-05-19 15:05:00.179073 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 51c53010013a 13 hours ago 1.22GB 2025-05-19 15:05:00.179084 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 ac71644620ce 13 hours ago 331MB 2025-05-19 15:05:00.179094 | orchestrator | registry.osism.tech/kolla/redis 2024.2 5f35356f01de 13 hours ago 331MB 2025-05-19 15:05:00.179105 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 08fa394d0c53 13 hours ago 597MB 2025-05-19 15:05:00.179169 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 56d7d2734b69 13 hours ago 358MB 2025-05-19 15:05:00.179182 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 db7b45ced52e 13 hours ago 417MB 2025-05-19 15:05:00.179192 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 b47ed4892777 13 hours ago 351MB 2025-05-19 15:05:00.179222 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 34a9317295e9 13 hours ago 360MB 2025-05-19 15:05:00.179233 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d8f8ac9b12c6 13 hours ago 365MB 2025-05-19 15:05:00.179244 | orchestrator | 
registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 409dc2f5ba10 13 hours ago 368MB 2025-05-19 15:05:00.179254 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 6be9e6161bec 13 hours ago 368MB 2025-05-19 15:05:00.179265 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3aa83c27e689 13 hours ago 1.25GB 2025-05-19 15:05:00.179276 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 43932f4e274f 13 hours ago 1.14GB 2025-05-19 15:05:00.179286 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 58b8e4608c23 13 hours ago 1.11GB 2025-05-19 15:05:00.179297 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 41ccd10b0a05 13 hours ago 1.12GB 2025-05-19 15:05:00.179307 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0a14cad7002e 13 hours ago 1.31GB 2025-05-19 15:05:00.179318 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 01f6e86e50b7 13 hours ago 1.2GB 2025-05-19 15:05:00.179329 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6c6c9e788527 13 hours ago 1.05GB 2025-05-19 15:05:00.179340 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 e5aa89b61516 13 hours ago 1.16GB 2025-05-19 15:05:00.179350 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 ec9dca657403 13 hours ago 1.43GB 2025-05-19 15:05:00.179361 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 439bee5c20ef 13 hours ago 1.3GB 2025-05-19 15:05:00.179371 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 68578d55bfec 13 hours ago 1.3GB 2025-05-19 15:05:00.179382 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 b79121c50e35 13 hours ago 1.3GB 2025-05-19 15:05:00.179392 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 dc3b6ea24901 13 hours ago 1.06GB 2025-05-19 15:05:00.179424 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 71c1782aa49f 13 hours ago 1.06GB 2025-05-19 15:05:00.179447 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 50ba6b6676a2 13 hours ago 1.06GB 2025-05-19 15:05:00.179468 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 3e093a9b579c 13 hours ago 1.06GB 2025-05-19 15:05:00.179489 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 5352fd2b8cef 13 hours ago 1.06GB 2025-05-19 15:05:00.179509 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 69c5fb32c728 13 hours ago 1.06GB 2025-05-19 15:05:00.179527 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 1cc196c1f755 13 hours ago 1.41GB 2025-05-19 15:05:00.179544 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 f2ba11db404f 13 hours ago 1.41GB 2025-05-19 15:05:00.179555 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 b4072c0c818e 13 hours ago 1.13GB 2025-05-19 15:05:00.179566 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 074824aa0e40 13 hours ago 1.1GB 2025-05-19 15:05:00.179577 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 faae1b094f64 13 hours ago 1.1GB 2025-05-19 15:05:00.179587 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 29c8ff23103b 13 hours ago 1.1GB 2025-05-19 15:05:00.179598 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 f7c07501a9b2 13 hours ago 1.13GB 2025-05-19 15:05:00.179616 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 bd68af59ca7c 13 hours ago 1.06GB 2025-05-19 15:05:00.179627 | orchestrator | 
registry.osism.tech/kolla/barbican-keystone-listener 2024.2 96b6bf2076dc 13 hours ago 1.07GB 2025-05-19 15:05:00.179637 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2bf9cc1505b7 13 hours ago 1.07GB 2025-05-19 15:05:00.179648 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 d8b4138c1ae7 13 hours ago 953MB 2025-05-19 15:05:00.179658 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 2eaccb490396 13 hours ago 953MB 2025-05-19 15:05:00.179669 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a250a9fddb69 13 hours ago 954MB 2025-05-19 15:05:00.179679 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 57e22b5fe5f4 13 hours ago 954MB 2025-05-19 15:05:00.441733 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-19 15:05:00.442315 | orchestrator | ++ semver latest 5.0.0 2025-05-19 15:05:00.502575 | orchestrator | 2025-05-19 15:05:00.502658 | orchestrator | ## Containers @ testbed-node-2 2025-05-19 15:05:00.502673 | orchestrator | 2025-05-19 15:05:00.502685 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-19 15:05:00.502696 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-19 15:05:00.502707 | orchestrator | + echo 2025-05-19 15:05:00.502718 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-05-19 15:05:00.502730 | orchestrator | + echo 2025-05-19 15:05:00.502740 | orchestrator | + osism container testbed-node-2 ps 2025-05-19 15:05:02.576351 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-19 15:05:02.576458 | orchestrator | bec974da2d08 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-19 15:05:02.576475 | orchestrator | 3f8eab24290b registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-19 15:05:02.576487 | orchestrator | f5e05c3c8908 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-19 15:05:02.576498 | orchestrator | 9b2cc30813b0 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-05-19 15:05:02.576530 | orchestrator | 9883a1ca8e3b registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-05-19 15:05:02.576542 | orchestrator | b0c632a063f2 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-05-19 15:05:02.576553 | orchestrator | f8291e340809 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-05-19 15:05:02.576564 | orchestrator | 3d82735fa107 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-05-19 15:05:02.576574 | orchestrator | b8b5b31a93ef registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-19 15:05:02.576585 | orchestrator | f31a52d48165 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-05-19 15:05:02.576595 | orchestrator | e442a5fbe3e9 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-05-19 
15:05:02.576606 | orchestrator | febb0f65049c registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-05-19 15:05:02.576643 | orchestrator | 0dbf95a4f182 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-05-19 15:05:02.576654 | orchestrator | be87f2d306c8 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-05-19 15:05:02.576665 | orchestrator | 335404e36efd registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-05-19 15:05:02.576676 | orchestrator | da1bd3ecfb29 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-05-19 15:05:02.576687 | orchestrator | e9ac674dc66a registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-05-19 15:05:02.576698 | orchestrator | 10201ae93878 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-05-19 15:05:02.576708 | orchestrator | 50fe2fc47467 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-05-19 15:05:02.576719 | orchestrator | 368b47214b18 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-19 15:05:02.576729 | orchestrator | 71ffefd43087 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-05-19 15:05:02.576758 | orchestrator | 9550a02757fc registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-05-19 15:05:02.576770 | orchestrator | 04cec9060288 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-19 15:05:02.576781 | orchestrator | 16088023de36 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-05-19 15:05:02.576793 | orchestrator | aaa8f6ee9f1c registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-05-19 15:05:02.576803 | orchestrator | b74b210366bd registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-05-19 15:05:02.576814 | orchestrator | b1abb7eb48cb registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-05-19 15:05:02.576825 | orchestrator | eedcdf73794a registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-05-19 15:05:02.576835 | orchestrator | 3ccfb4db2bbe registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-05-19 15:05:02.576846 | orchestrator | d7c706c878de registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-05-19 
15:05:02.576857 | orchestrator | cc69e5d0a999 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-05-19 15:05:02.576874 | orchestrator | c11f55fe5db2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-05-19 15:05:02.576887 | orchestrator | c6d70527fe05 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-05-19 15:05:02.576900 | orchestrator | a1b4dc1a9cfc registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-05-19 15:05:02.576913 | orchestrator | f02752bbf91d registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-05-19 15:05:02.576925 | orchestrator | 1c2858ae0035 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-05-19 15:05:02.576937 | orchestrator | df231e490a3b registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-05-19 15:05:02.576950 | orchestrator | 70ecf20631f3 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-05-19 15:05:02.576968 | orchestrator | b2a04606984d registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2025-05-19 15:05:02.576981 | orchestrator | 0e034b141494 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-2 2025-05-19 15:05:02.576995 | orchestrator | f11e64960c0e registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-05-19 15:05:02.577008 | orchestrator | d5f543c23008 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-05-19 15:05:02.577020 | orchestrator | 5f3fbcd628fb registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-05-19 15:05:02.577033 | orchestrator | 449fd56c4458 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2025-05-19 15:05:02.577053 | orchestrator | b6755a154e9e registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db 2025-05-19 15:05:02.577066 | orchestrator | 8b60eed1fd6b registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db 2025-05-19 15:05:02.577080 | orchestrator | 89cc0b619980 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2025-05-19 15:05:02.577093 | orchestrator | e6d3f64ec260 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2025-05-19 15:05:02.577105 | orchestrator | 8a4e8a83abfb registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2025-05-19 15:05:02.577170 | orchestrator | 280b21f2639f registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-05-19 15:05:02.577184 | orchestrator | 6105ddd48bc3 
registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-05-19 15:05:02.577204 | orchestrator | 94d152c84e2d registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-05-19 15:05:02.577217 | orchestrator | a11913038850 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-05-19 15:05:02.577230 | orchestrator | 5859f860d77a registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2025-05-19 15:05:02.577243 | orchestrator | 34d2e4b93654 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-05-19 15:05:02.577254 | orchestrator | 38ab970acf09 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-05-19 15:05:02.577265 | orchestrator | 6c28634b21ad registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-05-19 15:05:02.795837 | orchestrator | 2025-05-19 15:05:02.795935 | orchestrator | ## Images @ testbed-node-2 2025-05-19 15:05:02.795956 | orchestrator | 2025-05-19 15:05:02.795973 | orchestrator | + echo 2025-05-19 15:05:02.795990 | orchestrator | + echo '## Images @ testbed-node-2' 2025-05-19 15:05:02.796008 | orchestrator | + echo 2025-05-19 15:05:02.796024 | orchestrator | + osism container testbed-node-2 images 2025-05-19 15:05:04.916599 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-19 15:05:04.916708 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 337e4de32d05 12 hours ago 1.27GB 2025-05-19 15:05:04.916723 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e210496370d6 13 hours ago 325MB 2025-05-19 15:05:04.916737 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 98f8932715ed 13 hours ago 1.02GB 2025-05-19 15:05:04.916756 | orchestrator | registry.osism.tech/kolla/cron 2024.2 d1f2ebfdaafa 13 hours ago 325MB 2025-05-19 15:05:04.916768 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 fb398e112a94 13 hours ago 425MB 2025-05-19 15:05:04.916779 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ff61877c6f9a 13 hours ago 635MB 2025-05-19 15:05:04.916790 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 5a4e2ecd6cd8 13 hours ago 333MB 2025-05-19 15:05:04.916800 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 85cbe560a6a5 13 hours ago 753MB 2025-05-19 15:05:04.916811 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 764003e5a453 13 hours ago 336MB 2025-05-19 15:05:04.916821 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 aebda5188773 13 hours ago 382MB 2025-05-19 15:05:04.916832 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 3c81b3a03e26 13 hours ago 1.56GB 2025-05-19 15:05:04.916843 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 346c4c942746 13 hours ago 1.6GB 2025-05-19 15:05:04.916853 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 51c53010013a 13 hours ago 1.22GB 2025-05-19 15:05:04.916864 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 ac71644620ce 13 hours ago 331MB 2025-05-19 15:05:04.916874 | orchestrator | registry.osism.tech/kolla/redis 2024.2 5f35356f01de 13 hours ago 331MB 2025-05-19 15:05:04.916885 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 08fa394d0c53 13 hours ago 597MB 
2025-05-19 15:05:04.916895 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 56d7d2734b69 13 hours ago 358MB 2025-05-19 15:05:04.916930 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 db7b45ced52e 13 hours ago 417MB 2025-05-19 15:05:04.916942 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 b47ed4892777 13 hours ago 351MB 2025-05-19 15:05:04.916952 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 34a9317295e9 13 hours ago 360MB 2025-05-19 15:05:04.916963 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d8f8ac9b12c6 13 hours ago 365MB 2025-05-19 15:05:04.916973 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 409dc2f5ba10 13 hours ago 368MB 2025-05-19 15:05:04.916984 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 6be9e6161bec 13 hours ago 368MB 2025-05-19 15:05:04.916994 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3aa83c27e689 13 hours ago 1.25GB 2025-05-19 15:05:04.917005 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 43932f4e274f 13 hours ago 1.14GB 2025-05-19 15:05:04.917015 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 58b8e4608c23 13 hours ago 1.11GB 2025-05-19 15:05:04.917026 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 41ccd10b0a05 13 hours ago 1.12GB 2025-05-19 15:05:04.917036 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0a14cad7002e 13 hours ago 1.31GB 2025-05-19 15:05:04.917047 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 01f6e86e50b7 13 hours ago 1.2GB 2025-05-19 15:05:04.917057 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6c6c9e788527 13 hours ago 1.05GB 2025-05-19 15:05:04.917067 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 e5aa89b61516 13 hours ago 1.16GB 2025-05-19 15:05:04.917078 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 ec9dca657403 13 hours ago 1.43GB 2025-05-19 15:05:04.917105 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 439bee5c20ef 13 hours ago 1.3GB 2025-05-19 15:05:04.917151 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 68578d55bfec 13 hours ago 1.3GB 2025-05-19 15:05:04.917166 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 b79121c50e35 13 hours ago 1.3GB 2025-05-19 15:05:04.917179 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 dc3b6ea24901 13 hours ago 1.06GB 2025-05-19 15:05:04.917215 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 71c1782aa49f 13 hours ago 1.06GB 2025-05-19 15:05:04.917228 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 50ba6b6676a2 13 hours ago 1.06GB 2025-05-19 15:05:04.917241 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 3e093a9b579c 13 hours ago 1.06GB 2025-05-19 15:05:04.917254 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 5352fd2b8cef 13 hours ago 1.06GB 2025-05-19 15:05:04.917266 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 69c5fb32c728 13 hours ago 1.06GB 2025-05-19 15:05:04.917279 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 1cc196c1f755 13 hours ago 1.41GB 2025-05-19 15:05:04.917291 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 f2ba11db404f 13 hours ago 1.41GB 2025-05-19 15:05:04.917304 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 b4072c0c818e 13 hours 
ago 1.13GB 2025-05-19 15:05:04.917316 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 074824aa0e40 13 hours ago 1.1GB 2025-05-19 15:05:04.917328 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 faae1b094f64 13 hours ago 1.1GB 2025-05-19 15:05:04.917341 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 29c8ff23103b 13 hours ago 1.1GB 2025-05-19 15:05:04.917361 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 f7c07501a9b2 13 hours ago 1.13GB 2025-05-19 15:05:04.917374 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 bd68af59ca7c 13 hours ago 1.06GB 2025-05-19 15:05:04.917386 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 96b6bf2076dc 13 hours ago 1.07GB 2025-05-19 15:05:04.917399 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2bf9cc1505b7 13 hours ago 1.07GB 2025-05-19 15:05:04.917411 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 d8b4138c1ae7 13 hours ago 953MB 2025-05-19 15:05:04.917425 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 2eaccb490396 13 hours ago 953MB 2025-05-19 15:05:04.917438 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 57e22b5fe5f4 13 hours ago 954MB 2025-05-19 15:05:04.917451 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a250a9fddb69 13 hours ago 954MB 2025-05-19 15:05:05.160452 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-05-19 15:05:05.169808 | orchestrator | + set -e 2025-05-19 15:05:05.169861 | orchestrator | + source /opt/manager-vars.sh 2025-05-19 15:05:05.170877 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-19 15:05:05.170901 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-19 15:05:05.170927 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-19 15:05:05.170934 | orchestrator | ++ CEPH_VERSION=reef 2025-05-19 15:05:05.170959 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-19 15:05:05.170991 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-19 15:05:05.170999 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 15:05:05.171021 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-19 15:05:05.171027 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-19 15:05:05.171033 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-19 15:05:05.171039 | orchestrator | ++ export ARA=false 2025-05-19 15:05:05.171044 | orchestrator | ++ ARA=false 2025-05-19 15:05:05.171050 | orchestrator | ++ export TEMPEST=false 2025-05-19 15:05:05.171055 | orchestrator | ++ TEMPEST=false 2025-05-19 15:05:05.171061 | orchestrator | ++ export IS_ZUUL=true 2025-05-19 15:05:05.171066 | orchestrator | ++ IS_ZUUL=true 2025-05-19 15:05:05.171071 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2025-05-19 15:05:05.171077 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2025-05-19 15:05:05.171083 | orchestrator | ++ export EXTERNAL_API=false 2025-05-19 15:05:05.171088 | orchestrator | ++ EXTERNAL_API=false 2025-05-19 15:05:05.171093 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-19 15:05:05.171098 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-19 15:05:05.171104 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-19 15:05:05.171109 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-19 15:05:05.171127 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-19 15:05:05.171133 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-19 15:05:05.171174 | orchestrator | + [[ ceph-ansible == 
\c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-19 15:05:05.171181 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-05-19 15:05:05.181430 | orchestrator | + set -e 2025-05-19 15:05:05.182552 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 15:05:05.182582 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 15:05:05.182594 | orchestrator | ++ INTERACTIVE=false 2025-05-19 15:05:05.182605 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 15:05:05.182615 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 15:05:05.182626 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-05-19 15:05:05.183185 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-05-19 15:05:05.189636 | orchestrator | 2025-05-19 15:05:05.189669 | orchestrator | # Ceph status 2025-05-19 15:05:05.189681 | orchestrator | 2025-05-19 15:05:05.189692 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 15:05:05.189704 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-19 15:05:05.189714 | orchestrator | + echo 2025-05-19 15:05:05.189725 | orchestrator | + echo '# Ceph status' 2025-05-19 15:05:05.189736 | orchestrator | + echo 2025-05-19 15:05:05.189747 | orchestrator | + ceph -s 2025-05-19 15:05:05.732093 | orchestrator | cluster: 2025-05-19 15:05:05.732231 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-05-19 15:05:05.732247 | orchestrator | health: HEALTH_OK 2025-05-19 15:05:05.732290 | orchestrator | 2025-05-19 15:05:05.732303 | orchestrator | services: 2025-05-19 15:05:05.732314 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2025-05-19 15:05:05.732327 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-2, testbed-node-1 2025-05-19 15:05:05.732339 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-05-19 15:05:05.732350 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 23m) 2025-05-19 15:05:05.732361 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-05-19 15:05:05.732372 | orchestrator | 2025-05-19 15:05:05.732384 | orchestrator | data: 2025-05-19 15:05:05.732395 | orchestrator | volumes: 1/1 healthy 2025-05-19 15:05:05.732405 | orchestrator | pools: 14 pools, 401 pgs 2025-05-19 15:05:05.732417 | orchestrator | objects: 524 objects, 2.2 GiB 2025-05-19 15:05:05.732428 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-05-19 15:05:05.732440 | orchestrator | pgs: 401 active+clean 2025-05-19 15:05:05.732451 | orchestrator | 2025-05-19 15:05:05.786308 | orchestrator | 2025-05-19 15:05:05.786408 | orchestrator | # Ceph versions 2025-05-19 15:05:05.786427 | orchestrator | 2025-05-19 15:05:05.786442 | orchestrator | + echo 2025-05-19 15:05:05.786456 | orchestrator | + echo '# Ceph versions' 2025-05-19 15:05:05.786471 | orchestrator | + echo 2025-05-19 15:05:05.786484 | orchestrator | + ceph versions 2025-05-19 15:05:06.317550 | orchestrator | { 2025-05-19 15:05:06.317656 | orchestrator | "mon": { 2025-05-19 15:05:06.317674 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-19 15:05:06.317687 | orchestrator | }, 2025-05-19 15:05:06.317699 | orchestrator | "mgr": { 2025-05-19 15:05:06.317710 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-19 15:05:06.317721 | orchestrator | }, 2025-05-19 15:05:06.317731 | orchestrator | "osd": { 
2025-05-19 15:05:06.317742 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-05-19 15:05:06.317752 | orchestrator | }, 2025-05-19 15:05:06.317763 | orchestrator | "mds": { 2025-05-19 15:05:06.317774 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-19 15:05:06.317784 | orchestrator | }, 2025-05-19 15:05:06.317795 | orchestrator | "rgw": { 2025-05-19 15:05:06.317805 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-19 15:05:06.317816 | orchestrator | }, 2025-05-19 15:05:06.317827 | orchestrator | "overall": { 2025-05-19 15:05:06.317837 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-05-19 15:05:06.317848 | orchestrator | } 2025-05-19 15:05:06.317859 | orchestrator | } 2025-05-19 15:05:06.361224 | orchestrator | 2025-05-19 15:05:06.361319 | orchestrator | # Ceph OSD tree 2025-05-19 15:05:06.361335 | orchestrator | 2025-05-19 15:05:06.361348 | orchestrator | + echo 2025-05-19 15:05:06.361359 | orchestrator | + echo '# Ceph OSD tree' 2025-05-19 15:05:06.361371 | orchestrator | + echo 2025-05-19 15:05:06.361382 | orchestrator | + ceph osd df tree 2025-05-19 15:05:06.885037 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-05-19 15:05:06.885210 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 425 MiB 113 GiB 5.91 1.00 - root default 2025-05-19 15:05:06.885239 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-05-19 15:05:06.885252 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.42 0.92 190 up osd.0 2025-05-19 15:05:06.885272 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.42 1.09 202 up osd.4 2025-05-19 15:05:06.885290 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-05-19 15:05:06.885308 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.48 0.93 209 up osd.1 2025-05-19 15:05:06.885326 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.36 1.07 181 up osd.3 2025-05-19 15:05:06.885345 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2025-05-19 15:05:06.885398 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.34 1.07 191 up osd.2 2025-05-19 15:05:06.885410 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.48 0.93 197 up osd.5 2025-05-19 15:05:06.885421 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 425 MiB 113 GiB 5.91 2025-05-19 15:05:06.885432 | orchestrator | MIN/MAX VAR: 0.92/1.09 STDDEV: 0.46 2025-05-19 15:05:06.935961 | orchestrator | 2025-05-19 15:05:06.936060 | orchestrator | # Ceph monitor status 2025-05-19 15:05:06.936076 | orchestrator | 2025-05-19 15:05:06.936088 | orchestrator | + echo 2025-05-19 15:05:06.936100 | orchestrator | + echo '# Ceph monitor status' 2025-05-19 15:05:06.936111 | orchestrator | + echo 2025-05-19 15:05:06.936181 | orchestrator | + ceph mon stat 2025-05-19 15:05:07.519778 | orchestrator | e1: 3 mons at 
{testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-05-19 15:05:07.568620 | orchestrator | 2025-05-19 15:05:07.568695 | orchestrator | # Ceph quorum status 2025-05-19 15:05:07.568708 | orchestrator | 2025-05-19 15:05:07.568720 | orchestrator | + echo 2025-05-19 15:05:07.568731 | orchestrator | + echo '# Ceph quorum status' 2025-05-19 15:05:07.568742 | orchestrator | + echo 2025-05-19 15:05:07.569282 | orchestrator | + ceph quorum_status 2025-05-19 15:05:07.569306 | orchestrator | + jq 2025-05-19 15:05:08.217560 | orchestrator | { 2025-05-19 15:05:08.217662 | orchestrator | "election_epoch": 8, 2025-05-19 15:05:08.217679 | orchestrator | "quorum": [ 2025-05-19 15:05:08.217691 | orchestrator | 0, 2025-05-19 15:05:08.217702 | orchestrator | 1, 2025-05-19 15:05:08.217713 | orchestrator | 2 2025-05-19 15:05:08.217724 | orchestrator | ], 2025-05-19 15:05:08.217735 | orchestrator | "quorum_names": [ 2025-05-19 15:05:08.217746 | orchestrator | "testbed-node-0", 2025-05-19 15:05:08.217756 | orchestrator | "testbed-node-1", 2025-05-19 15:05:08.217767 | orchestrator | "testbed-node-2" 2025-05-19 15:05:08.217778 | orchestrator | ], 2025-05-19 15:05:08.217789 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-05-19 15:05:08.217801 | orchestrator | "quorum_age": 1584, 2025-05-19 15:05:08.217812 | orchestrator | "features": { 2025-05-19 15:05:08.217823 | orchestrator | "quorum_con": "4540138322906710015", 2025-05-19 15:05:08.217833 | orchestrator | "quorum_mon": [ 2025-05-19 15:05:08.217844 | orchestrator | "kraken", 2025-05-19 15:05:08.217855 | orchestrator | "luminous", 2025-05-19 15:05:08.217866 | orchestrator | "mimic", 2025-05-19 15:05:08.217877 | orchestrator | "osdmap-prune", 2025-05-19 15:05:08.217888 | orchestrator | "nautilus", 2025-05-19 15:05:08.217898 | orchestrator | "octopus", 2025-05-19 15:05:08.217909 | orchestrator | "pacific", 2025-05-19 15:05:08.217920 | orchestrator | "elector-pinging", 2025-05-19 15:05:08.217952 | orchestrator | "quincy", 2025-05-19 15:05:08.217963 | orchestrator | "reef" 2025-05-19 15:05:08.217974 | orchestrator | ] 2025-05-19 15:05:08.217985 | orchestrator | }, 2025-05-19 15:05:08.217995 | orchestrator | "monmap": { 2025-05-19 15:05:08.218006 | orchestrator | "epoch": 1, 2025-05-19 15:05:08.218074 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-05-19 15:05:08.218089 | orchestrator | "modified": "2025-05-19T14:38:27.531993Z", 2025-05-19 15:05:08.218101 | orchestrator | "created": "2025-05-19T14:38:27.531993Z", 2025-05-19 15:05:08.218113 | orchestrator | "min_mon_release": 18, 2025-05-19 15:05:08.218160 | orchestrator | "min_mon_release_name": "reef", 2025-05-19 15:05:08.218180 | orchestrator | "election_strategy": 1, 2025-05-19 15:05:08.218194 | orchestrator | "disallowed_leaders: ": "", 2025-05-19 15:05:08.218208 | orchestrator | "stretch_mode": false, 2025-05-19 15:05:08.218220 | orchestrator | "tiebreaker_mon": "", 2025-05-19 15:05:08.218233 | orchestrator | "removed_ranks: ": "", 2025-05-19 15:05:08.218245 | orchestrator | "features": { 2025-05-19 15:05:08.218256 | orchestrator | "persistent": [ 2025-05-19 15:05:08.218269 | orchestrator | "kraken", 2025-05-19 15:05:08.218281 | orchestrator | "luminous", 2025-05-19 15:05:08.218293 | 
orchestrator | "mimic", 2025-05-19 15:05:08.218305 | orchestrator | "osdmap-prune", 2025-05-19 15:05:08.218317 | orchestrator | "nautilus", 2025-05-19 15:05:08.218330 | orchestrator | "octopus", 2025-05-19 15:05:08.218365 | orchestrator | "pacific", 2025-05-19 15:05:08.218378 | orchestrator | "elector-pinging", 2025-05-19 15:05:08.218390 | orchestrator | "quincy", 2025-05-19 15:05:08.218403 | orchestrator | "reef" 2025-05-19 15:05:08.218415 | orchestrator | ], 2025-05-19 15:05:08.218428 | orchestrator | "optional": [] 2025-05-19 15:05:08.218440 | orchestrator | }, 2025-05-19 15:05:08.218452 | orchestrator | "mons": [ 2025-05-19 15:05:08.218465 | orchestrator | { 2025-05-19 15:05:08.218477 | orchestrator | "rank": 0, 2025-05-19 15:05:08.218488 | orchestrator | "name": "testbed-node-0", 2025-05-19 15:05:08.218499 | orchestrator | "public_addrs": { 2025-05-19 15:05:08.218509 | orchestrator | "addrvec": [ 2025-05-19 15:05:08.218520 | orchestrator | { 2025-05-19 15:05:08.218530 | orchestrator | "type": "v2", 2025-05-19 15:05:08.218547 | orchestrator | "addr": "192.168.16.10:3300", 2025-05-19 15:05:08.218559 | orchestrator | "nonce": 0 2025-05-19 15:05:08.218569 | orchestrator | }, 2025-05-19 15:05:08.218580 | orchestrator | { 2025-05-19 15:05:08.218590 | orchestrator | "type": "v1", 2025-05-19 15:05:08.218601 | orchestrator | "addr": "192.168.16.10:6789", 2025-05-19 15:05:08.218611 | orchestrator | "nonce": 0 2025-05-19 15:05:08.218622 | orchestrator | } 2025-05-19 15:05:08.218633 | orchestrator | ] 2025-05-19 15:05:08.218643 | orchestrator | }, 2025-05-19 15:05:08.218654 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-05-19 15:05:08.218664 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-05-19 15:05:08.218675 | orchestrator | "priority": 0, 2025-05-19 15:05:08.218686 | orchestrator | "weight": 0, 2025-05-19 15:05:08.218697 | orchestrator | "crush_location": "{}" 2025-05-19 15:05:08.218707 | orchestrator | }, 2025-05-19 15:05:08.218718 | orchestrator | { 2025-05-19 15:05:08.218728 | orchestrator | "rank": 1, 2025-05-19 15:05:08.218739 | orchestrator | "name": "testbed-node-1", 2025-05-19 15:05:08.218749 | orchestrator | "public_addrs": { 2025-05-19 15:05:08.218760 | orchestrator | "addrvec": [ 2025-05-19 15:05:08.218771 | orchestrator | { 2025-05-19 15:05:08.218781 | orchestrator | "type": "v2", 2025-05-19 15:05:08.218792 | orchestrator | "addr": "192.168.16.11:3300", 2025-05-19 15:05:08.218803 | orchestrator | "nonce": 0 2025-05-19 15:05:08.218813 | orchestrator | }, 2025-05-19 15:05:08.218824 | orchestrator | { 2025-05-19 15:05:08.218834 | orchestrator | "type": "v1", 2025-05-19 15:05:08.218845 | orchestrator | "addr": "192.168.16.11:6789", 2025-05-19 15:05:08.218855 | orchestrator | "nonce": 0 2025-05-19 15:05:08.218866 | orchestrator | } 2025-05-19 15:05:08.218877 | orchestrator | ] 2025-05-19 15:05:08.218887 | orchestrator | }, 2025-05-19 15:05:08.218898 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-05-19 15:05:08.218909 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-05-19 15:05:08.218919 | orchestrator | "priority": 0, 2025-05-19 15:05:08.218930 | orchestrator | "weight": 0, 2025-05-19 15:05:08.218940 | orchestrator | "crush_location": "{}" 2025-05-19 15:05:08.218951 | orchestrator | }, 2025-05-19 15:05:08.218962 | orchestrator | { 2025-05-19 15:05:08.218973 | orchestrator | "rank": 2, 2025-05-19 15:05:08.218983 | orchestrator | "name": "testbed-node-2", 2025-05-19 15:05:08.218994 | orchestrator | "public_addrs": { 2025-05-19 15:05:08.219004 | 
orchestrator | "addrvec": [ 2025-05-19 15:05:08.219015 | orchestrator | { 2025-05-19 15:05:08.219025 | orchestrator | "type": "v2", 2025-05-19 15:05:08.219036 | orchestrator | "addr": "192.168.16.12:3300", 2025-05-19 15:05:08.219047 | orchestrator | "nonce": 0 2025-05-19 15:05:08.219057 | orchestrator | }, 2025-05-19 15:05:08.219068 | orchestrator | { 2025-05-19 15:05:08.219078 | orchestrator | "type": "v1", 2025-05-19 15:05:08.219089 | orchestrator | "addr": "192.168.16.12:6789", 2025-05-19 15:05:08.219100 | orchestrator | "nonce": 0 2025-05-19 15:05:08.219110 | orchestrator | } 2025-05-19 15:05:08.219142 | orchestrator | ] 2025-05-19 15:05:08.219154 | orchestrator | }, 2025-05-19 15:05:08.219164 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-05-19 15:05:08.219175 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-05-19 15:05:08.219186 | orchestrator | "priority": 0, 2025-05-19 15:05:08.219197 | orchestrator | "weight": 0, 2025-05-19 15:05:08.219207 | orchestrator | "crush_location": "{}" 2025-05-19 15:05:08.219218 | orchestrator | } 2025-05-19 15:05:08.219228 | orchestrator | ] 2025-05-19 15:05:08.219239 | orchestrator | } 2025-05-19 15:05:08.219257 | orchestrator | } 2025-05-19 15:05:08.219268 | orchestrator | 2025-05-19 15:05:08.219279 | orchestrator | # Ceph free space status 2025-05-19 15:05:08.219290 | orchestrator | 2025-05-19 15:05:08.219301 | orchestrator | + echo 2025-05-19 15:05:08.219312 | orchestrator | + echo '# Ceph free space status' 2025-05-19 15:05:08.219323 | orchestrator | + echo 2025-05-19 15:05:08.219334 | orchestrator | + ceph df 2025-05-19 15:05:08.796067 | orchestrator | --- RAW STORAGE --- 2025-05-19 15:05:08.796214 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-05-19 15:05:08.796243 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2025-05-19 15:05:08.796255 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2025-05-19 15:05:08.796266 | orchestrator | 2025-05-19 15:05:08.796278 | orchestrator | --- POOLS --- 2025-05-19 15:05:08.796290 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-05-19 15:05:08.796302 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-05-19 15:05:08.796314 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-05-19 15:05:08.796325 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-05-19 15:05:08.796335 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-05-19 15:05:08.796346 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-05-19 15:05:08.796357 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-05-19 15:05:08.796367 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-05-19 15:05:08.796378 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-05-19 15:05:08.796389 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-05-19 15:05:08.796399 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-05-19 15:05:08.796410 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-05-19 15:05:08.796420 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.90 35 GiB 2025-05-19 15:05:08.796431 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-05-19 15:05:08.796442 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-05-19 15:05:08.844112 | orchestrator | ++ semver latest 5.0.0 2025-05-19 15:05:08.901426 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-19 15:05:08.901579 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-19 15:05:08.901591 | 
orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-05-19 15:05:08.901600 | orchestrator | + osism apply facts 2025-05-19 15:05:10.635566 | orchestrator | 2025-05-19 15:05:10 | INFO  | Task 72bf952a-d141-4b33-9394-ec1a41e57eb2 (facts) was prepared for execution. 2025-05-19 15:05:10.635665 | orchestrator | 2025-05-19 15:05:10 | INFO  | It takes a moment until task 72bf952a-d141-4b33-9394-ec1a41e57eb2 (facts) has been started and output is visible here. 2025-05-19 15:05:14.685345 | orchestrator | 2025-05-19 15:05:14.685488 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-19 15:05:14.685511 | orchestrator | 2025-05-19 15:05:14.686574 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-19 15:05:14.687039 | orchestrator | Monday 19 May 2025 15:05:14 +0000 (0:00:00.261) 0:00:00.261 ************ 2025-05-19 15:05:15.305727 | orchestrator | ok: [testbed-manager] 2025-05-19 15:05:15.786146 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:15.787594 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:05:15.788295 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:05:15.789797 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:05:15.791177 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:05:15.792877 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:05:15.798717 | orchestrator | 2025-05-19 15:05:15.799623 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-19 15:05:15.801662 | orchestrator | Monday 19 May 2025 15:05:15 +0000 (0:00:01.097) 0:00:01.358 ************ 2025-05-19 15:05:15.954537 | orchestrator | skipping: [testbed-manager] 2025-05-19 15:05:16.038852 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:16.117520 | orchestrator | skipping: [testbed-node-1] 2025-05-19 15:05:16.191179 | orchestrator | skipping: [testbed-node-2] 2025-05-19 15:05:16.283545 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:05:17.037052 | orchestrator | skipping: [testbed-node-4] 2025-05-19 15:05:17.037429 | orchestrator | skipping: [testbed-node-5] 2025-05-19 15:05:17.041607 | orchestrator | 2025-05-19 15:05:17.041692 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 15:05:17.041708 | orchestrator | 2025-05-19 15:05:17.042560 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 15:05:17.042894 | orchestrator | Monday 19 May 2025 15:05:17 +0000 (0:00:01.248) 0:00:02.606 ************ 2025-05-19 15:05:22.198925 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:05:22.199064 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:05:22.200121 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:22.201285 | orchestrator | ok: [testbed-manager] 2025-05-19 15:05:22.202429 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:05:22.203251 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:05:22.204160 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:05:22.205097 | orchestrator | 2025-05-19 15:05:22.205627 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-19 15:05:22.206399 | orchestrator | 2025-05-19 15:05:22.206824 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-19 15:05:22.207394 | orchestrator | Monday 19 May 2025 15:05:22 +0000 (0:00:05.170) 0:00:07.776 ************ 2025-05-19 15:05:22.355455 | 
orchestrator | skipping: [testbed-manager] 2025-05-19 15:05:22.430309 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:22.504425 | orchestrator | skipping: [testbed-node-1] 2025-05-19 15:05:22.580832 | orchestrator | skipping: [testbed-node-2] 2025-05-19 15:05:22.657633 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:05:22.695312 | orchestrator | skipping: [testbed-node-4] 2025-05-19 15:05:22.695716 | orchestrator | skipping: [testbed-node-5] 2025-05-19 15:05:22.697264 | orchestrator | 2025-05-19 15:05:22.697762 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 15:05:22.698242 | orchestrator | 2025-05-19 15:05:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 15:05:22.699164 | orchestrator | 2025-05-19 15:05:22 | INFO  | Please wait and do not abort execution. 2025-05-19 15:05:22.700204 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 15:05:22.701024 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 15:05:22.702252 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 15:05:22.703005 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 15:05:22.703842 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 15:05:22.704553 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 15:05:22.705569 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 15:05:22.706216 | orchestrator | 2025-05-19 15:05:22.706872 | orchestrator | 2025-05-19 15:05:22.707304 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 15:05:22.707787 | orchestrator | Monday 19 May 2025 15:05:22 +0000 (0:00:00.496) 0:00:08.273 ************ 2025-05-19 15:05:22.708281 | orchestrator | =============================================================================== 2025-05-19 15:05:22.708657 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.17s 2025-05-19 15:05:22.709115 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2025-05-19 15:05:22.709627 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s 2025-05-19 15:05:22.710214 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-05-19 15:05:23.336125 | orchestrator | + osism validate ceph-mons 2025-05-19 15:05:43.105280 | orchestrator | 2025-05-19 15:05:43.105380 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-05-19 15:05:43.105397 | orchestrator | 2025-05-19 15:05:43.105409 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-05-19 15:05:43.105420 | orchestrator | Monday 19 May 2025 15:05:29 +0000 (0:00:00.433) 0:00:00.433 ************ 2025-05-19 15:05:43.105431 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:05:43.105443 | orchestrator | 2025-05-19 15:05:43.105453 | orchestrator | TASK [Create report output directory] 
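
The container existence test that `osism validate ceph-mons` starts with checks each control node for a running ceph-mon container. A rough local equivalent, assuming Docker as the container runtime (as the kolla/ceph-daemon container names elsewhere in this log suggest):

# Sketch: verify a ceph-mon container is present and running on this host.
if [ -n "$(docker ps --filter name=ceph-mon --filter status=running --format '{{.Names}}')" ]; then
    echo "ceph-mon container: running"
else
    echo "ceph-mon container: missing or stopped" >&2
    exit 1
fi
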
****************************************** 2025-05-19 15:05:43.105464 | orchestrator | Monday 19 May 2025 15:05:29 +0000 (0:00:00.604) 0:00:01.037 ************ 2025-05-19 15:05:43.105475 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:05:43.105486 | orchestrator | 2025-05-19 15:05:43.105514 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-19 15:05:43.105526 | orchestrator | Monday 19 May 2025 15:05:30 +0000 (0:00:00.813) 0:00:01.851 ************ 2025-05-19 15:05:43.105537 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.105549 | orchestrator | 2025-05-19 15:05:43.105560 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-05-19 15:05:43.105571 | orchestrator | Monday 19 May 2025 15:05:30 +0000 (0:00:00.232) 0:00:02.084 ************ 2025-05-19 15:05:43.105586 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.105597 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:05:43.105608 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:05:43.105618 | orchestrator | 2025-05-19 15:05:43.105629 | orchestrator | TASK [Get container info] ****************************************************** 2025-05-19 15:05:43.105640 | orchestrator | Monday 19 May 2025 15:05:31 +0000 (0:00:00.292) 0:00:02.376 ************ 2025-05-19 15:05:43.105651 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:05:43.105662 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.105673 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:05:43.105683 | orchestrator | 2025-05-19 15:05:43.105694 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-05-19 15:05:43.105705 | orchestrator | Monday 19 May 2025 15:05:32 +0000 (0:00:01.007) 0:00:03.383 ************ 2025-05-19 15:05:43.105716 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.105727 | orchestrator | skipping: [testbed-node-1] 2025-05-19 15:05:43.105738 | orchestrator | skipping: [testbed-node-2] 2025-05-19 15:05:43.105749 | orchestrator | 2025-05-19 15:05:43.105760 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-05-19 15:05:43.105773 | orchestrator | Monday 19 May 2025 15:05:32 +0000 (0:00:00.269) 0:00:03.653 ************ 2025-05-19 15:05:43.105786 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.105798 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:05:43.105810 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:05:43.105823 | orchestrator | 2025-05-19 15:05:43.105836 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-19 15:05:43.105849 | orchestrator | Monday 19 May 2025 15:05:32 +0000 (0:00:00.446) 0:00:04.099 ************ 2025-05-19 15:05:43.105862 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.105874 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:05:43.105886 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:05:43.105898 | orchestrator | 2025-05-19 15:05:43.105911 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-05-19 15:05:43.105923 | orchestrator | Monday 19 May 2025 15:05:33 +0000 (0:00:00.280) 0:00:04.380 ************ 2025-05-19 15:05:43.105955 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.105968 | orchestrator | skipping: [testbed-node-1] 2025-05-19 15:05:43.105980 | orchestrator | skipping: [testbed-node-2] 2025-05-19 
15:05:43.105994 | orchestrator | 2025-05-19 15:05:43.106006 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-05-19 15:05:43.106068 | orchestrator | Monday 19 May 2025 15:05:33 +0000 (0:00:00.261) 0:00:04.641 ************ 2025-05-19 15:05:43.106082 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.106094 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:05:43.106108 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:05:43.106120 | orchestrator | 2025-05-19 15:05:43.106130 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-19 15:05:43.106141 | orchestrator | Monday 19 May 2025 15:05:33 +0000 (0:00:00.291) 0:00:04.933 ************ 2025-05-19 15:05:43.106169 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.106180 | orchestrator | 2025-05-19 15:05:43.106191 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-19 15:05:43.106202 | orchestrator | Monday 19 May 2025 15:05:34 +0000 (0:00:00.591) 0:00:05.525 ************ 2025-05-19 15:05:43.106212 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.106223 | orchestrator | 2025-05-19 15:05:43.106233 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-19 15:05:43.106244 | orchestrator | Monday 19 May 2025 15:05:34 +0000 (0:00:00.240) 0:00:05.766 ************ 2025-05-19 15:05:43.106255 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.106265 | orchestrator | 2025-05-19 15:05:43.106276 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:05:43.106287 | orchestrator | Monday 19 May 2025 15:05:34 +0000 (0:00:00.241) 0:00:06.007 ************ 2025-05-19 15:05:43.106298 | orchestrator | 2025-05-19 15:05:43.106308 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:05:43.106319 | orchestrator | Monday 19 May 2025 15:05:34 +0000 (0:00:00.067) 0:00:06.074 ************ 2025-05-19 15:05:43.106329 | orchestrator | 2025-05-19 15:05:43.106340 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:05:43.106351 | orchestrator | Monday 19 May 2025 15:05:34 +0000 (0:00:00.067) 0:00:06.141 ************ 2025-05-19 15:05:43.106361 | orchestrator | 2025-05-19 15:05:43.106372 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-19 15:05:43.106382 | orchestrator | Monday 19 May 2025 15:05:35 +0000 (0:00:00.072) 0:00:06.214 ************ 2025-05-19 15:05:43.106393 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.106403 | orchestrator | 2025-05-19 15:05:43.106414 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-05-19 15:05:43.106425 | orchestrator | Monday 19 May 2025 15:05:35 +0000 (0:00:00.245) 0:00:06.459 ************ 2025-05-19 15:05:43.106435 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.106446 | orchestrator | 2025-05-19 15:05:43.106472 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-05-19 15:05:43.106484 | orchestrator | Monday 19 May 2025 15:05:35 +0000 (0:00:00.229) 0:00:06.689 ************ 2025-05-19 15:05:43.106495 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.106505 | orchestrator | 2025-05-19 15:05:43.106516 | orchestrator | 
TASK [Get monmap info from one mon container] ********************************** 2025-05-19 15:05:43.106527 | orchestrator | Monday 19 May 2025 15:05:35 +0000 (0:00:00.112) 0:00:06.802 ************ 2025-05-19 15:05:43.106538 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:05:43.106548 | orchestrator | 2025-05-19 15:05:43.106559 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-05-19 15:05:43.106570 | orchestrator | Monday 19 May 2025 15:05:37 +0000 (0:00:01.542) 0:00:08.345 ************ 2025-05-19 15:05:43.106580 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.106591 | orchestrator | 2025-05-19 15:05:43.106602 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-05-19 15:05:43.106621 | orchestrator | Monday 19 May 2025 15:05:37 +0000 (0:00:00.259) 0:00:08.604 ************ 2025-05-19 15:05:43.106632 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.106643 | orchestrator | 2025-05-19 15:05:43.106653 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-05-19 15:05:43.106669 | orchestrator | Monday 19 May 2025 15:05:37 +0000 (0:00:00.294) 0:00:08.899 ************ 2025-05-19 15:05:43.106680 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.106691 | orchestrator | 2025-05-19 15:05:43.106701 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-05-19 15:05:43.106712 | orchestrator | Monday 19 May 2025 15:05:37 +0000 (0:00:00.228) 0:00:09.127 ************ 2025-05-19 15:05:43.106723 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.106733 | orchestrator | 2025-05-19 15:05:43.106744 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-05-19 15:05:43.106754 | orchestrator | Monday 19 May 2025 15:05:38 +0000 (0:00:00.238) 0:00:09.365 ************ 2025-05-19 15:05:43.106765 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.106776 | orchestrator | 2025-05-19 15:05:43.106786 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-05-19 15:05:43.106797 | orchestrator | Monday 19 May 2025 15:05:38 +0000 (0:00:00.116) 0:00:09.482 ************ 2025-05-19 15:05:43.106808 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.106818 | orchestrator | 2025-05-19 15:05:43.106829 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-05-19 15:05:43.106839 | orchestrator | Monday 19 May 2025 15:05:38 +0000 (0:00:00.136) 0:00:09.619 ************ 2025-05-19 15:05:43.106850 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.106860 | orchestrator | 2025-05-19 15:05:43.106871 | orchestrator | TASK [Gather status data] ****************************************************** 2025-05-19 15:05:43.106882 | orchestrator | Monday 19 May 2025 15:05:38 +0000 (0:00:00.103) 0:00:09.723 ************ 2025-05-19 15:05:43.106892 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:05:43.106903 | orchestrator | 2025-05-19 15:05:43.106914 | orchestrator | TASK [Set health test data] **************************************************** 2025-05-19 15:05:43.106924 | orchestrator | Monday 19 May 2025 15:05:39 +0000 (0:00:01.290) 0:00:11.013 ************ 2025-05-19 15:05:43.106935 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.106945 | orchestrator | 2025-05-19 15:05:43.106956 | orchestrator | TASK [Fail 
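
The quorum and FSID tests above compare the live monmap against the configured cluster. A hedged sketch of the same two checks with the stock Ceph CLI and jq; the expected FSID is a placeholder for the value the validator reads from the deployment's configuration:

# Sketch: every defined mon should appear in quorum_names.
mons=$(ceph mon dump --format json | jq -r '.mons[].name' | sort)
quorum=$(ceph quorum_status --format json | jq -r '.quorum_names[]' | sort)
[ "$mons" = "$quorum" ] && echo "quorum: complete" || echo "quorum: degraded" >&2

# Sketch: the cluster FSID should match the configured one (placeholder value).
expected_fsid="00000000-0000-0000-0000-000000000000"
[ "$(ceph fsid)" = "$expected_fsid" ] && echo "fsid: match" || echo "fsid: mismatch" >&2
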
cluster-health if health is not acceptable] ************************* 2025-05-19 15:05:43.106967 | orchestrator | Monday 19 May 2025 15:05:40 +0000 (0:00:00.198) 0:00:11.212 ************ 2025-05-19 15:05:43.106977 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.106988 | orchestrator | 2025-05-19 15:05:43.106998 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-05-19 15:05:43.107009 | orchestrator | Monday 19 May 2025 15:05:40 +0000 (0:00:00.121) 0:00:11.334 ************ 2025-05-19 15:05:43.107019 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:05:43.107030 | orchestrator | 2025-05-19 15:05:43.107041 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-05-19 15:05:43.107051 | orchestrator | Monday 19 May 2025 15:05:40 +0000 (0:00:00.137) 0:00:11.471 ************ 2025-05-19 15:05:43.107062 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.107075 | orchestrator | 2025-05-19 15:05:43.107096 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-05-19 15:05:43.107116 | orchestrator | Monday 19 May 2025 15:05:40 +0000 (0:00:00.111) 0:00:11.583 ************ 2025-05-19 15:05:43.107137 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.107196 | orchestrator | 2025-05-19 15:05:43.107210 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-19 15:05:43.107220 | orchestrator | Monday 19 May 2025 15:05:40 +0000 (0:00:00.247) 0:00:11.830 ************ 2025-05-19 15:05:43.107231 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:05:43.107242 | orchestrator | 2025-05-19 15:05:43.107252 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-19 15:05:43.107270 | orchestrator | Monday 19 May 2025 15:05:40 +0000 (0:00:00.213) 0:00:12.044 ************ 2025-05-19 15:05:43.107281 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:05:43.107291 | orchestrator | 2025-05-19 15:05:43.107302 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-19 15:05:43.107313 | orchestrator | Monday 19 May 2025 15:05:41 +0000 (0:00:00.210) 0:00:12.254 ************ 2025-05-19 15:05:43.107323 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:05:43.107334 | orchestrator | 2025-05-19 15:05:43.107345 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-19 15:05:43.107422 | orchestrator | Monday 19 May 2025 15:05:42 +0000 (0:00:01.421) 0:00:13.675 ************ 2025-05-19 15:05:43.107437 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:05:43.107447 | orchestrator | 2025-05-19 15:05:43.107458 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-19 15:05:43.107469 | orchestrator | Monday 19 May 2025 15:05:42 +0000 (0:00:00.227) 0:00:13.903 ************ 2025-05-19 15:05:43.107479 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:05:43.107490 | orchestrator | 2025-05-19 15:05:43.107510 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:05:44.914882 | orchestrator | Monday 19 May 2025 15:05:42 +0000 (0:00:00.199) 0:00:14.102 ************ 2025-05-19 15:05:44.914975 | orchestrator | 
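
The cluster-health test above has a lenient and a strict variant; in this run the lenient one passes and the strict pair is skipped. A minimal sketch of the distinction, assuming `ceph status --format json` exposes the usual `health.status` field:

# Sketch: HEALTH_OK satisfies both variants, HEALTH_WARN only the lenient one.
status=$(ceph status --format json | jq -r '.health.status')
case "$status" in
    HEALTH_OK)   echo "lenient and strict checks pass" ;;
    HEALTH_WARN) echo "lenient check passes; strict check would fail" ;;
    *)           echo "health ${status}: both checks fail" >&2; exit 1 ;;
esac
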
2025-05-19 15:05:44.914992 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:05:44.915004 | orchestrator | Monday 19 May 2025 15:05:42 +0000 (0:00:00.062) 0:00:14.164 ************ 2025-05-19 15:05:44.915015 | orchestrator | 2025-05-19 15:05:44.915026 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:05:44.915037 | orchestrator | Monday 19 May 2025 15:05:43 +0000 (0:00:00.061) 0:00:14.226 ************ 2025-05-19 15:05:44.915048 | orchestrator | 2025-05-19 15:05:44.915059 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-19 15:05:44.915069 | orchestrator | Monday 19 May 2025 15:05:43 +0000 (0:00:00.065) 0:00:14.292 ************ 2025-05-19 15:05:44.915080 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:05:44.915091 | orchestrator | 2025-05-19 15:05:44.915102 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-19 15:05:44.915125 | orchestrator | Monday 19 May 2025 15:05:44 +0000 (0:00:01.210) 0:00:15.502 ************ 2025-05-19 15:05:44.915137 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-05-19 15:05:44.915148 | orchestrator |  "msg": [ 2025-05-19 15:05:44.915205 | orchestrator |  "Validator run completed.", 2025-05-19 15:05:44.915218 | orchestrator |  "You can find the report file here:", 2025-05-19 15:05:44.915229 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-05-19T15:05:29+00:00-report.json", 2025-05-19 15:05:44.915240 | orchestrator |  "on the following host:", 2025-05-19 15:05:44.915251 | orchestrator |  "testbed-manager" 2025-05-19 15:05:44.915262 | orchestrator |  ] 2025-05-19 15:05:44.915273 | orchestrator | } 2025-05-19 15:05:44.915284 | orchestrator | 2025-05-19 15:05:44.915295 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 15:05:44.915306 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-19 15:05:44.915318 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 15:05:44.915329 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 15:05:44.915340 | orchestrator | 2025-05-19 15:05:44.915351 | orchestrator | 2025-05-19 15:05:44.915361 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 15:05:44.915394 | orchestrator | Monday 19 May 2025 15:05:44 +0000 (0:00:00.433) 0:00:15.935 ************ 2025-05-19 15:05:44.915405 | orchestrator | =============================================================================== 2025-05-19 15:05:44.915416 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.54s 2025-05-19 15:05:44.915427 | orchestrator | Aggregate test results step one ----------------------------------------- 1.42s 2025-05-19 15:05:44.915452 | orchestrator | Gather status data ------------------------------------------------------ 1.29s 2025-05-19 15:05:44.915466 | orchestrator | Write report file ------------------------------------------------------- 1.21s 2025-05-19 15:05:44.915479 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2025-05-19 15:05:44.915491 | orchestrator | 
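
Each validator run ends by writing a timestamped JSON report to /opt/reports/validator on testbed-manager, as printed above. The report schema itself is not shown in this log, so the sketch below only locates and pretty-prints the newest ceph-mons report:

# Sketch: inspect the most recent ceph-mons validator report on the manager.
latest=$(ls -t /opt/reports/validator/ceph-mons-validator-*-report.json | head -n1)
jq . "$latest"
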
Create report output directory ------------------------------------------ 0.81s 2025-05-19 15:05:44.915504 | orchestrator | Get timestamp for report file ------------------------------------------- 0.60s 2025-05-19 15:05:44.915517 | orchestrator | Aggregate test results step one ----------------------------------------- 0.59s 2025-05-19 15:05:44.915529 | orchestrator | Set test result to passed if container is existing ---------------------- 0.45s 2025-05-19 15:05:44.915542 | orchestrator | Print report file information ------------------------------------------- 0.43s 2025-05-19 15:05:44.915554 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.29s 2025-05-19 15:05:44.915567 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2025-05-19 15:05:44.915579 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.29s 2025-05-19 15:05:44.915591 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s 2025-05-19 15:05:44.915603 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s 2025-05-19 15:05:44.915615 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.26s 2025-05-19 15:05:44.915628 | orchestrator | Set quorum test data ---------------------------------------------------- 0.26s 2025-05-19 15:05:44.915641 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.25s 2025-05-19 15:05:44.915654 | orchestrator | Print report file information ------------------------------------------- 0.25s 2025-05-19 15:05:44.915666 | orchestrator | Aggregate test results step three --------------------------------------- 0.24s 2025-05-19 15:05:45.065391 | orchestrator | + osism validate ceph-mgrs 2025-05-19 15:06:04.397815 | orchestrator | 2025-05-19 15:06:04.398001 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-05-19 15:06:04.398112 | orchestrator | 2025-05-19 15:06:04.398128 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-05-19 15:06:04.398140 | orchestrator | Monday 19 May 2025 15:05:50 +0000 (0:00:00.444) 0:00:00.444 ************ 2025-05-19 15:06:04.398151 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:04.398163 | orchestrator | 2025-05-19 15:06:04.398211 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-19 15:06:04.398225 | orchestrator | Monday 19 May 2025 15:05:51 +0000 (0:00:00.598) 0:00:01.042 ************ 2025-05-19 15:06:04.398237 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:04.398247 | orchestrator | 2025-05-19 15:06:04.398258 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-19 15:06:04.398269 | orchestrator | Monday 19 May 2025 15:05:52 +0000 (0:00:00.788) 0:00:01.831 ************ 2025-05-19 15:06:04.398280 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:06:04.398291 | orchestrator | 2025-05-19 15:06:04.398302 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-05-19 15:06:04.398312 | orchestrator | Monday 19 May 2025 15:05:52 +0000 (0:00:00.213) 0:00:02.044 ************ 2025-05-19 15:06:04.398324 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:06:04.398335 | 
orchestrator | ok: [testbed-node-1] 2025-05-19 15:06:04.398346 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:06:04.398357 | orchestrator | 2025-05-19 15:06:04.398368 | orchestrator | TASK [Get container info] ****************************************************** 2025-05-19 15:06:04.398417 | orchestrator | Monday 19 May 2025 15:05:52 +0000 (0:00:00.277) 0:00:02.322 ************ 2025-05-19 15:06:04.398428 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:06:04.398439 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:06:04.398450 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:06:04.398460 | orchestrator | 2025-05-19 15:06:04.398486 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-05-19 15:06:04.398497 | orchestrator | Monday 19 May 2025 15:05:53 +0000 (0:00:00.947) 0:00:03.270 ************ 2025-05-19 15:06:04.398508 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:06:04.398519 | orchestrator | skipping: [testbed-node-1] 2025-05-19 15:06:04.398530 | orchestrator | skipping: [testbed-node-2] 2025-05-19 15:06:04.398541 | orchestrator | 2025-05-19 15:06:04.398551 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-05-19 15:06:04.398562 | orchestrator | Monday 19 May 2025 15:05:53 +0000 (0:00:00.276) 0:00:03.546 ************ 2025-05-19 15:06:04.398573 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:06:04.398584 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:06:04.398594 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:06:04.398605 | orchestrator | 2025-05-19 15:06:04.398615 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-19 15:06:04.398626 | orchestrator | Monday 19 May 2025 15:05:54 +0000 (0:00:00.470) 0:00:04.017 ************ 2025-05-19 15:06:04.398636 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:06:04.398647 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:06:04.398657 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:06:04.398668 | orchestrator | 2025-05-19 15:06:04.398679 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-05-19 15:06:04.398689 | orchestrator | Monday 19 May 2025 15:05:54 +0000 (0:00:00.293) 0:00:04.311 ************ 2025-05-19 15:06:04.398700 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:06:04.398711 | orchestrator | skipping: [testbed-node-1] 2025-05-19 15:06:04.398722 | orchestrator | skipping: [testbed-node-2] 2025-05-19 15:06:04.398732 | orchestrator | 2025-05-19 15:06:04.398743 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-05-19 15:06:04.398754 | orchestrator | Monday 19 May 2025 15:05:54 +0000 (0:00:00.265) 0:00:04.576 ************ 2025-05-19 15:06:04.398764 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:06:04.398775 | orchestrator | ok: [testbed-node-1] 2025-05-19 15:06:04.398786 | orchestrator | ok: [testbed-node-2] 2025-05-19 15:06:04.398796 | orchestrator | 2025-05-19 15:06:04.398807 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-19 15:06:04.398818 | orchestrator | Monday 19 May 2025 15:05:55 +0000 (0:00:00.308) 0:00:04.885 ************ 2025-05-19 15:06:04.398828 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:06:04.398839 | orchestrator | 2025-05-19 15:06:04.398849 | orchestrator | TASK [Aggregate test results step two] ***************************************** 
2025-05-19 15:06:04.398860 | orchestrator | Monday 19 May 2025 15:05:55 +0000 (0:00:00.623) 0:00:05.508 ************ 2025-05-19 15:06:04.398870 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:06:04.398881 | orchestrator | 2025-05-19 15:06:04.398892 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-19 15:06:04.398902 | orchestrator | Monday 19 May 2025 15:05:55 +0000 (0:00:00.242) 0:00:05.751 ************ 2025-05-19 15:06:04.398913 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:06:04.398923 | orchestrator | 2025-05-19 15:06:04.398935 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:06:04.398945 | orchestrator | Monday 19 May 2025 15:05:56 +0000 (0:00:00.232) 0:00:05.983 ************ 2025-05-19 15:06:04.398956 | orchestrator | 2025-05-19 15:06:04.398967 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:06:04.398977 | orchestrator | Monday 19 May 2025 15:05:56 +0000 (0:00:00.067) 0:00:06.050 ************ 2025-05-19 15:06:04.398988 | orchestrator | 2025-05-19 15:06:04.398999 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:06:04.399017 | orchestrator | Monday 19 May 2025 15:05:56 +0000 (0:00:00.068) 0:00:06.118 ************ 2025-05-19 15:06:04.399028 | orchestrator | 2025-05-19 15:06:04.399038 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-19 15:06:04.399049 | orchestrator | Monday 19 May 2025 15:05:56 +0000 (0:00:00.070) 0:00:06.189 ************ 2025-05-19 15:06:04.399059 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:06:04.399070 | orchestrator | 2025-05-19 15:06:04.399081 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-05-19 15:06:04.399091 | orchestrator | Monday 19 May 2025 15:05:56 +0000 (0:00:00.229) 0:00:06.418 ************ 2025-05-19 15:06:04.399102 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:06:04.399113 | orchestrator | 2025-05-19 15:06:04.399142 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-05-19 15:06:04.399154 | orchestrator | Monday 19 May 2025 15:05:56 +0000 (0:00:00.234) 0:00:06.653 ************ 2025-05-19 15:06:04.399165 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:06:04.399206 | orchestrator | 2025-05-19 15:06:04.399226 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-05-19 15:06:04.399245 | orchestrator | Monday 19 May 2025 15:05:56 +0000 (0:00:00.107) 0:00:06.760 ************ 2025-05-19 15:06:04.399263 | orchestrator | changed: [testbed-node-0] 2025-05-19 15:06:04.399281 | orchestrator | 2025-05-19 15:06:04.399293 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-05-19 15:06:04.399303 | orchestrator | Monday 19 May 2025 15:05:58 +0000 (0:00:01.849) 0:00:08.610 ************ 2025-05-19 15:06:04.399314 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:06:04.399324 | orchestrator | 2025-05-19 15:06:04.399335 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-05-19 15:06:04.399345 | orchestrator | Monday 19 May 2025 15:05:59 +0000 (0:00:00.238) 0:00:08.848 ************ 2025-05-19 15:06:04.399355 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:06:04.399366 | 
orchestrator | 2025-05-19 15:06:04.399377 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-05-19 15:06:04.399388 | orchestrator | Monday 19 May 2025 15:05:59 +0000 (0:00:00.634) 0:00:09.483 ************ 2025-05-19 15:06:04.399398 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:06:04.399409 | orchestrator | 2025-05-19 15:06:04.399419 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-05-19 15:06:04.399430 | orchestrator | Monday 19 May 2025 15:05:59 +0000 (0:00:00.119) 0:00:09.602 ************ 2025-05-19 15:06:04.399440 | orchestrator | ok: [testbed-node-0] 2025-05-19 15:06:04.399451 | orchestrator | 2025-05-19 15:06:04.399462 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-19 15:06:04.399478 | orchestrator | Monday 19 May 2025 15:05:59 +0000 (0:00:00.134) 0:00:09.736 ************ 2025-05-19 15:06:04.399489 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:04.399500 | orchestrator | 2025-05-19 15:06:04.399510 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-19 15:06:04.399521 | orchestrator | Monday 19 May 2025 15:06:00 +0000 (0:00:00.237) 0:00:09.974 ************ 2025-05-19 15:06:04.399531 | orchestrator | skipping: [testbed-node-0] 2025-05-19 15:06:04.399542 | orchestrator | 2025-05-19 15:06:04.399552 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-19 15:06:04.399563 | orchestrator | Monday 19 May 2025 15:06:00 +0000 (0:00:00.222) 0:00:10.197 ************ 2025-05-19 15:06:04.399573 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:04.399584 | orchestrator | 2025-05-19 15:06:04.399594 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-19 15:06:04.399605 | orchestrator | Monday 19 May 2025 15:06:01 +0000 (0:00:01.254) 0:00:11.451 ************ 2025-05-19 15:06:04.399615 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:04.399626 | orchestrator | 2025-05-19 15:06:04.399636 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-19 15:06:04.399654 | orchestrator | Monday 19 May 2025 15:06:01 +0000 (0:00:00.260) 0:00:11.712 ************ 2025-05-19 15:06:04.399665 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:04.399676 | orchestrator | 2025-05-19 15:06:04.399686 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:06:04.399697 | orchestrator | Monday 19 May 2025 15:06:02 +0000 (0:00:00.264) 0:00:11.976 ************ 2025-05-19 15:06:04.399707 | orchestrator | 2025-05-19 15:06:04.399718 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:06:04.399728 | orchestrator | Monday 19 May 2025 15:06:02 +0000 (0:00:00.075) 0:00:12.052 ************ 2025-05-19 15:06:04.399739 | orchestrator | 2025-05-19 15:06:04.399750 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:06:04.399760 | orchestrator | Monday 19 May 2025 15:06:02 +0000 (0:00:00.066) 0:00:12.118 ************ 2025-05-19 15:06:04.399770 | orchestrator | 2025-05-19 15:06:04.399781 | orchestrator | RUNNING HANDLER [Write report 
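
The mgr validation above pulls the module list as JSON and fails if a required module is disabled. A sketch under the assumption that `ceph mgr module ls --format json` returns the usual `enabled_modules` array; the required list here is illustrative and not taken from the OSISM configuration (always-on modules are reported separately under `always_on_modules`):

# Sketch: every required module must be in enabled_modules (assumed list).
required="prometheus dashboard"
enabled=$(ceph mgr module ls --format json | jq -r '.enabled_modules[]')
for m in $required; do
    echo "$enabled" | grep -qx "$m" || { echo "mgr module $m not enabled" >&2; exit 1; }
done
echo "all required mgr modules enabled"
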
file] ******************************************** 2025-05-19 15:06:04.399791 | orchestrator | Monday 19 May 2025 15:06:02 +0000 (0:00:00.070) 0:00:12.188 ************ 2025-05-19 15:06:04.399802 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:04.399812 | orchestrator | 2025-05-19 15:06:04.399823 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-19 15:06:04.399833 | orchestrator | Monday 19 May 2025 15:06:03 +0000 (0:00:01.603) 0:00:13.792 ************ 2025-05-19 15:06:04.399843 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-05-19 15:06:04.399854 | orchestrator |  "msg": [ 2025-05-19 15:06:04.399865 | orchestrator |  "Validator run completed.", 2025-05-19 15:06:04.399876 | orchestrator |  "You can find the report file here:", 2025-05-19 15:06:04.399887 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-05-19T15:05:51+00:00-report.json", 2025-05-19 15:06:04.399899 | orchestrator |  "on the following host:", 2025-05-19 15:06:04.399910 | orchestrator |  "testbed-manager" 2025-05-19 15:06:04.399921 | orchestrator |  ] 2025-05-19 15:06:04.399932 | orchestrator | } 2025-05-19 15:06:04.399943 | orchestrator | 2025-05-19 15:06:04.399953 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 15:06:04.399965 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-19 15:06:04.399977 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 15:06:04.399996 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 15:06:04.667883 | orchestrator | 2025-05-19 15:06:04.668019 | orchestrator | 2025-05-19 15:06:04.668034 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 15:06:04.668048 | orchestrator | Monday 19 May 2025 15:06:04 +0000 (0:00:00.380) 0:00:14.172 ************ 2025-05-19 15:06:04.668059 | orchestrator | =============================================================================== 2025-05-19 15:06:04.668070 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.85s 2025-05-19 15:06:04.668081 | orchestrator | Write report file ------------------------------------------------------- 1.60s 2025-05-19 15:06:04.668092 | orchestrator | Aggregate test results step one ----------------------------------------- 1.25s 2025-05-19 15:06:04.668102 | orchestrator | Get container info ------------------------------------------------------ 0.95s 2025-05-19 15:06:04.668113 | orchestrator | Create report output directory ------------------------------------------ 0.79s 2025-05-19 15:06:04.668123 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.63s 2025-05-19 15:06:04.668134 | orchestrator | Aggregate test results step one ----------------------------------------- 0.62s 2025-05-19 15:06:04.668166 | orchestrator | Get timestamp for report file ------------------------------------------- 0.60s 2025-05-19 15:06:04.668201 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s 2025-05-19 15:06:04.668213 | orchestrator | Print report file information ------------------------------------------- 0.38s 2025-05-19 15:06:04.668223 | orchestrator | Set test result to passed 
if ceph-mgr is running ------------------------ 0.31s 2025-05-19 15:06:04.668233 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2025-05-19 15:06:04.668245 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2025-05-19 15:06:04.668255 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2025-05-19 15:06:04.668266 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.27s 2025-05-19 15:06:04.668295 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2025-05-19 15:06:04.668306 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2025-05-19 15:06:04.668317 | orchestrator | Aggregate test results step two ----------------------------------------- 0.24s 2025-05-19 15:06:04.668328 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.24s 2025-05-19 15:06:04.668339 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.24s 2025-05-19 15:06:04.885302 | orchestrator | + osism validate ceph-osds 2025-05-19 15:06:14.699261 | orchestrator | 2025-05-19 15:06:14.699373 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-05-19 15:06:14.699391 | orchestrator | 2025-05-19 15:06:14.699403 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-05-19 15:06:14.699415 | orchestrator | Monday 19 May 2025 15:06:10 +0000 (0:00:00.419) 0:00:00.419 ************ 2025-05-19 15:06:14.699427 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:14.699438 | orchestrator | 2025-05-19 15:06:14.699449 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 15:06:14.699460 | orchestrator | Monday 19 May 2025 15:06:11 +0000 (0:00:00.591) 0:00:01.010 ************ 2025-05-19 15:06:14.699470 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:14.699481 | orchestrator | 2025-05-19 15:06:14.699492 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-19 15:06:14.699502 | orchestrator | Monday 19 May 2025 15:06:11 +0000 (0:00:00.365) 0:00:01.376 ************ 2025-05-19 15:06:14.699513 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:14.699524 | orchestrator | 2025-05-19 15:06:14.699535 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-19 15:06:14.699545 | orchestrator | Monday 19 May 2025 15:06:12 +0000 (0:00:00.862) 0:00:02.239 ************ 2025-05-19 15:06:14.699556 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:14.699568 | orchestrator | 2025-05-19 15:06:14.699579 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-19 15:06:14.699589 | orchestrator | Monday 19 May 2025 15:06:12 +0000 (0:00:00.114) 0:00:02.353 ************ 2025-05-19 15:06:14.699600 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:14.699611 | orchestrator | 2025-05-19 15:06:14.699622 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-19 15:06:14.699633 | orchestrator | Monday 19 May 2025 15:06:12 +0000 (0:00:00.131) 0:00:02.485 
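
The OSD validation derives the expected OSD count from the per-host device lists; in this testbed that works out to two devices on each of three storage nodes. A sketch comparing that expectation against the cluster, assuming the standard `ceph osd stat` JSON fields:

# Sketch: expected count is an assumption (3 nodes x 2 OSD devices).
expected=6
read -r num up < <(ceph osd stat --format json | jq -r '"\(.num_osds) \(.num_up_osds)"')
if [ "$num" -eq "$expected" ] && [ "$up" -eq "$expected" ]; then
    echo "all ${expected} OSDs present and up"
else
    echo "expected ${expected} OSDs, found ${num} (${up} up)" >&2
    exit 1
fi
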
************ 2025-05-19 15:06:14.699643 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:14.699654 | orchestrator | skipping: [testbed-node-4] 2025-05-19 15:06:14.699665 | orchestrator | skipping: [testbed-node-5] 2025-05-19 15:06:14.699676 | orchestrator | 2025-05-19 15:06:14.699686 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-19 15:06:14.699697 | orchestrator | Monday 19 May 2025 15:06:13 +0000 (0:00:00.315) 0:00:02.800 ************ 2025-05-19 15:06:14.699708 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:14.699739 | orchestrator | 2025-05-19 15:06:14.699751 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-19 15:06:14.699763 | orchestrator | Monday 19 May 2025 15:06:13 +0000 (0:00:00.122) 0:00:02.922 ************ 2025-05-19 15:06:14.699776 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:14.699788 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:14.699800 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:14.699812 | orchestrator | 2025-05-19 15:06:14.699824 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-05-19 15:06:14.699836 | orchestrator | Monday 19 May 2025 15:06:13 +0000 (0:00:00.307) 0:00:03.230 ************ 2025-05-19 15:06:14.699848 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:14.699860 | orchestrator | 2025-05-19 15:06:14.699873 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-19 15:06:14.699886 | orchestrator | Monday 19 May 2025 15:06:14 +0000 (0:00:00.513) 0:00:03.744 ************ 2025-05-19 15:06:14.699898 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:14.699910 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:14.699922 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:14.699935 | orchestrator | 2025-05-19 15:06:14.699947 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-05-19 15:06:14.699959 | orchestrator | Monday 19 May 2025 15:06:14 +0000 (0:00:00.443) 0:00:04.188 ************ 2025-05-19 15:06:14.699975 | orchestrator | skipping: [testbed-node-3] => (item={'id': '27cf72fc0ee88ff860aa2ef71eaa7168dc0082383d31ec9e0d9b9954638c50ab', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-19 15:06:14.699990 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fdbbd6b099f1c9776ff73511f8b8b41a859afce852811fd8057ceb4ab39cbe50', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-19 15:06:14.700004 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6db6e30bce6431c544284d9b435ab74f69d0f08c3c7fd7018fbd1e6909ea615d', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-19 15:06:14.700039 | orchestrator | skipping: [testbed-node-3] => (item={'id': '44d52fe2f72b384414d60421c8c9b0f6e95a75f31d0e5de30182dee55f475797', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-19 15:06:14.700069 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9665ecff58e7ed6693202f517a9bceb3be2b8caa602a5e99f1b48443e76d0fdd', 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-19 15:06:14.700113 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0851d66be7f31aa4f395dea67c63c2426633bfe4e0b5225997ad1c4d30496d98', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-19 15:06:14.700127 | orchestrator | skipping: [testbed-node-3] => (item={'id': '165db8179c2442584f15f3511f9a2355939285655a71729fee9e8b00bda0b7a7', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-05-19 15:06:14.700138 | orchestrator | skipping: [testbed-node-3] => (item={'id': '26ff22af5b4b71cc8a2442753cabc5d579a320cf08176223be6f6416247191ed', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-19 15:06:14.700149 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e484548d177b2f416f78e76344aff720133e6f6672cb6a48b1bb2356b8a86b63', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-19 15:06:14.700173 | orchestrator | skipping: [testbed-node-3] => (item={'id': '214d04a8bb718b7c8da3f784aa03921a8247e75c6d003e0aad7fdebe86b037ab', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-05-19 15:06:14.700211 | orchestrator | skipping: [testbed-node-3] => (item={'id': '78ef5becb0a0ff85e01f27b8fc9b82e846653c5852d723d65e3ab68cde1721c7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2025-05-19 15:06:14.700231 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0a48b92b2524242b333f698ff93551f78ea8cb3b74289a84e0b3ec7f35cf007d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-19 15:06:14.700250 | orchestrator | ok: [testbed-node-3] => (item={'id': '5691579a0a222f7538d00bdb10a15ae53664e463e399666f6ccd478e4ea5c09b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-19 15:06:14.700271 | orchestrator | ok: [testbed-node-3] => (item={'id': 'd78e93a7cc769bb1baa5fbef880d499b6e36bbf0b9c4f74d92675b0495e58574', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-19 15:06:14.700291 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1cea2456fe291a95acd6e6e76b7e9e43bebddf39c24b337fbd0039a1231926da', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-05-19 15:06:14.700312 | orchestrator | skipping: [testbed-node-3] => (item={'id': '08056a0935d640e4a5cf3381b07bb033fdfd0d6c74737787b2afc35c1c290053', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-19 15:06:14.700331 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'8323422f0ce728a70df88ae1122bf07584da3cffc537ed795656e1c6284d9544', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-19 15:06:14.700352 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f2adbf80626baa0aaf124555b95a454f30004475710e3c716d12fc5314d07f56', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-19 15:06:14.700366 | orchestrator | skipping: [testbed-node-3] => (item={'id': '54cc5a85b169fda910f645e95acf0ef8713b9c0a00db206bad8df04d967e29d7', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-19 15:06:14.700384 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eb4d19551110d450320beb17529ae7139359d557f7d9f38ed618050faee002c7', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-19 15:06:14.700396 | orchestrator | skipping: [testbed-node-4] => (item={'id': '83f70080c7a7832c6f6cf1bc696ec80e855a5f714913ef32a4dfb0b4302a5dab', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-19 15:06:14.700415 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ec840a93577401b637492ec23d9a753b3adcac4f1d76ff0a219882f293f27416', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-19 15:06:14.944870 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3e28fca840e5bd5fc68e23556308894c9999390fd523ec9b09d54211968d03dd', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-19 15:06:14.945005 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fd0cf019ac2443184c16d09a45ce8d220ec71966f1a5115b2c33f00e81e299ce', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-19 15:06:14.945044 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b9b60aba6bb90725ec53c2007130e359a20e291f8a3ca44a7f5f5cbbc5c832a6', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-19 15:06:14.945059 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f6bb2988841a0f72ee73ca50f9dbd27aa2b3eb5c6b0df977fd29f0e35f80777b', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-19 15:06:14.945070 | orchestrator | skipping: [testbed-node-4] => (item={'id': '730395559f0a5b27120c980674fc34d8107f71af1dce4e8fdfdfcd6f98a50d72', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-05-19 15:06:14.945082 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd331ef3c56c348706706832a1178b9db9b6b5eefd8149ed2ef23e582a7fecf25', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-19 15:06:14.945093 | orchestrator | skipping: [testbed-node-4] => 
(item={'id': 'a3b85837e722700922dc5f3eda792204af9bb3d0e9243cac781070faa84d3c60', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-19 15:06:14.945105 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e600e02db0f68c941782306e816d0547768f3f98f1a18106fe6769b9d0d72af4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-05-19 15:06:14.945116 | orchestrator | skipping: [testbed-node-4] => (item={'id': '769748d00a5979ce3b180c2f6272a2d6bb6548b49bbf6d97a9a9ccc59d554fe1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2025-05-19 15:06:14.945127 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3571c842621889dcebc9195158885ad1ed9c4a214033ac260cc4dca14858fc90', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-19 15:06:14.945139 | orchestrator | ok: [testbed-node-4] => (item={'id': 'eae251aea8392f79134fe7d87be3165465f62c6a5fe0130163758a698f89cb00', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-19 15:06:14.945151 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c79aa5a584c114b9152e0c97405e7357bfb69ae3c58c0bfd0ee3e34797e2854f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-19 15:06:14.945163 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0d5be53cfdf607f91879b70dfc1abaffc03266ccc5009612c50ec6c80a258640', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-05-19 15:06:14.945174 | orchestrator | skipping: [testbed-node-4] => (item={'id': '338509603b2cb48ebc10b6e7680516fbb38f215b7cc5496ee6d1f6fc4a6bc1c3', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-19 15:06:14.945238 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e391940279877778e12196dfe47e6cf8149d732b5b050a32ef3fc495db6e317e', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-19 15:06:14.945277 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2211745c8ddd8061c0f7fc640507818b51505dc0b4de45a4cd652777a5dfae06', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-19 15:06:14.945289 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c8a01d66f80ea30842f2b38bfd238b61abf4756f00d83a3f1e04eec28ed1195b', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-19 15:06:14.945301 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6f84378f300b38130784178250905a35fd1526282cfb54db9f9b36833a5a29a1', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-19 15:06:14.945312 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'f184edc56216a5fbd8a4d2982511d89a3d8db2d22e297bbbd759c6ae444e9735', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-19 15:06:14.945323 | orchestrator | skipping: [testbed-node-5] => (item={'id': '80d880be823a0416a1ec54cbd3392d046ad4e2c0a38f32b41d0ac11d60c4732f', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-19 15:06:14.945334 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1b1b7f0a4de37e2c99420720bcbeb1128b0daec01ab23cba1d1dbca08e488249', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-19 15:06:14.945346 | orchestrator | skipping: [testbed-node-5] => (item={'id': '995477a5c1a9052e522690bbb2c8ce190c6ef13833263428607eee8157f9592b', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-19 15:06:14.945357 | orchestrator | skipping: [testbed-node-5] => (item={'id': '915e4fa79707c37910e1aa6b3d1f2e3a34949b54a17a9598a80f7fa0367a79a7', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-19 15:06:14.945368 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ac00c080aba6bf974620dc5e51bc42603fe139eba5fcee59e9b7e0c94b9081aa', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-19 15:06:14.945379 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7166b6d99ceb2503c063eb770f1c482c07ea8c9dbcfdfbca1cbc5fd8515b7849', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-05-19 15:06:14.945389 | orchestrator | skipping: [testbed-node-5] => (item={'id': '860c9fff546b6990c77565ed8c2688258073e595c4b23b172a4375aa4284066d', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-19 15:06:14.945400 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9962a66dbc88ed973c3e08f31316060829876ca730d8a79c7c8a1d91b40dc41a', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-19 15:06:14.945412 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4b3f6d2ac0c6ad0a9e4b5f86e539d48baa540cee754d85e0a43602d882c49e9c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-05-19 15:06:14.945431 | orchestrator | skipping: [testbed-node-5] => (item={'id': '48e1ad2896befef18a77e86e4b9f2f553a5ac2ea64d9801901a1dac2ebf85b7a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2025-05-19 15:06:14.945530 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0e64e5cd1740754d26d6c6b7fd2108bdc0bba5507f5ef91c611c9013b86de3b3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-19 
15:06:14.945556 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ccbd712f4827e1f9943876da860558657bb0ba0a8d4d0b57f8e74634e79e8655', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-19 15:06:22.808796 | orchestrator | ok: [testbed-node-5] => (item={'id': '5f604fb3b7e4a3e8f9d4d1b26924ec4a2f27d2a72c88f40deeffb19a0c8076d5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-19 15:06:22.808917 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8ad63fdf3c4e84fd1827a29939a4f8a09ba7d2156575bf28a17cfeda55725eba', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-05-19 15:06:22.808934 | orchestrator | skipping: [testbed-node-5] => (item={'id': '686ffcc855fa6812f04a2bdb9ffbe2ed3f12c36c313c849f4755f54d16b9c9fb', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-19 15:06:22.808948 | orchestrator | skipping: [testbed-node-5] => (item={'id': '692182c37a71dbbd3beaa7da436b93603ee8f11107c80e5c62598cf9a060580e', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-05-19 15:06:22.808960 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'df8f877375d17db93dfbe283a74b3665ab83984046be712e4a0a884c45b46a45', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-19 15:06:22.808971 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a3ca40ea7e8528414d5e3b0f2cc490a2dec67c0493351aa2c8269b847e450fdf', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-19 15:06:22.808982 | orchestrator | skipping: [testbed-node-5] => (item={'id': '92377e3e36765a45c2529cf1269468b71246ee4849fec0e5cda9f31fe116994c', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-19 15:06:22.808993 | orchestrator | 2025-05-19 15:06:22.809006 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-05-19 15:06:22.809017 | orchestrator | Monday 19 May 2025 15:06:14 +0000 (0:00:00.483) 0:00:04.672 ************ 2025-05-19 15:06:22.809028 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:22.809039 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:22.809050 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:22.809060 | orchestrator | 2025-05-19 15:06:22.809071 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-05-19 15:06:22.809082 | orchestrator | Monday 19 May 2025 15:06:15 +0000 (0:00:00.284) 0:00:04.956 ************ 2025-05-19 15:06:22.809093 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:22.809104 | orchestrator | skipping: [testbed-node-4] 2025-05-19 15:06:22.809115 | orchestrator | skipping: [testbed-node-5] 2025-05-19 15:06:22.809125 | orchestrator | 2025-05-19 15:06:22.809136 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-05-19 15:06:22.809147 | orchestrator | Monday 19 May 2025 15:06:15 +0000 (0:00:00.424) 0:00:05.380 ************ 
2025-05-19 15:06:22.809157 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:22.809168 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:22.809178 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:22.809189 | orchestrator | 2025-05-19 15:06:22.809241 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-19 15:06:22.809252 | orchestrator | Monday 19 May 2025 15:06:15 +0000 (0:00:00.306) 0:00:05.687 ************ 2025-05-19 15:06:22.809286 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:22.809297 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:22.809308 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:22.809319 | orchestrator | 2025-05-19 15:06:22.809332 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-05-19 15:06:22.809345 | orchestrator | Monday 19 May 2025 15:06:16 +0000 (0:00:00.267) 0:00:05.955 ************ 2025-05-19 15:06:22.809357 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-05-19 15:06:22.809372 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-05-19 15:06:22.809384 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:22.809398 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-05-19 15:06:22.809426 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-05-19 15:06:22.809438 | orchestrator | skipping: [testbed-node-4] 2025-05-19 15:06:22.809451 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-05-19 15:06:22.809464 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-05-19 15:06:22.809476 | orchestrator | skipping: [testbed-node-5] 2025-05-19 15:06:22.809488 | orchestrator | 2025-05-19 15:06:22.809500 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-05-19 15:06:22.809512 | orchestrator | Monday 19 May 2025 15:06:16 +0000 (0:00:00.287) 0:00:06.242 ************ 2025-05-19 15:06:22.809525 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:22.809537 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:22.809550 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:22.809562 | orchestrator | 2025-05-19 15:06:22.809593 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-05-19 15:06:22.809606 | orchestrator | Monday 19 May 2025 15:06:16 +0000 (0:00:00.445) 0:00:06.688 ************ 2025-05-19 15:06:22.809619 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:22.809631 | orchestrator | skipping: [testbed-node-4] 2025-05-19 15:06:22.809643 | orchestrator | skipping: [testbed-node-5] 2025-05-19 15:06:22.809656 | orchestrator | 2025-05-19 15:06:22.809669 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-05-19 15:06:22.809680 | orchestrator | Monday 19 May 2025 15:06:17 +0000 (0:00:00.294) 0:00:06.983 ************ 2025-05-19 15:06:22.809691 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:22.809702 | orchestrator | skipping: [testbed-node-4] 2025-05-19 15:06:22.809712 | orchestrator | skipping: [testbed-node-5] 2025-05-19 15:06:22.809723 | orchestrator | 2025-05-19 
15:06:22.809733 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-05-19 15:06:22.809744 | orchestrator | Monday 19 May 2025 15:06:17 +0000 (0:00:00.288) 0:00:07.271 ************ 2025-05-19 15:06:22.809755 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:22.809766 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:22.809776 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:22.809787 | orchestrator | 2025-05-19 15:06:22.809798 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-19 15:06:22.809809 | orchestrator | Monday 19 May 2025 15:06:17 +0000 (0:00:00.284) 0:00:07.556 ************ 2025-05-19 15:06:22.809819 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:22.809830 | orchestrator | 2025-05-19 15:06:22.809840 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-19 15:06:22.809851 | orchestrator | Monday 19 May 2025 15:06:18 +0000 (0:00:00.614) 0:00:08.170 ************ 2025-05-19 15:06:22.809862 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:22.809872 | orchestrator | 2025-05-19 15:06:22.809883 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-19 15:06:22.809894 | orchestrator | Monday 19 May 2025 15:06:18 +0000 (0:00:00.224) 0:00:08.395 ************ 2025-05-19 15:06:22.809911 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:22.809922 | orchestrator | 2025-05-19 15:06:22.809932 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:06:22.809943 | orchestrator | Monday 19 May 2025 15:06:18 +0000 (0:00:00.229) 0:00:08.624 ************ 2025-05-19 15:06:22.809954 | orchestrator | 2025-05-19 15:06:22.809964 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:06:22.809975 | orchestrator | Monday 19 May 2025 15:06:18 +0000 (0:00:00.065) 0:00:08.690 ************ 2025-05-19 15:06:22.809985 | orchestrator | 2025-05-19 15:06:22.809996 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:06:22.810007 | orchestrator | Monday 19 May 2025 15:06:19 +0000 (0:00:00.067) 0:00:08.757 ************ 2025-05-19 15:06:22.810072 | orchestrator | 2025-05-19 15:06:22.810084 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-19 15:06:22.810095 | orchestrator | Monday 19 May 2025 15:06:19 +0000 (0:00:00.068) 0:00:08.826 ************ 2025-05-19 15:06:22.810106 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:22.810116 | orchestrator | 2025-05-19 15:06:22.810127 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-05-19 15:06:22.810138 | orchestrator | Monday 19 May 2025 15:06:19 +0000 (0:00:00.240) 0:00:09.066 ************ 2025-05-19 15:06:22.810148 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:22.810159 | orchestrator | 2025-05-19 15:06:22.810170 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-19 15:06:22.810180 | orchestrator | Monday 19 May 2025 15:06:19 +0000 (0:00:00.233) 0:00:09.300 ************ 2025-05-19 15:06:22.810207 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:22.810219 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:22.810230 | orchestrator | ok: [testbed-node-5] 
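For reference, the container checks above reduce to counting the running ceph-osd containers on each OSD host and comparing the result with the expected value (two OSDs per node, six in total, in this testbed). The same count can be reproduced by hand on any of testbed-node-3 to testbed-node-5; a minimal sketch, assuming the Docker CLI is usable on the node:

    # Count running ceph-osd containers on this host; the testbed expects 2.
    docker ps --filter 'name=ceph-osd' --filter 'status=running' --format '{{.Names}}' | wc -l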
2025-05-19 15:06:22.810240 | orchestrator | 2025-05-19 15:06:22.810251 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-05-19 15:06:22.810261 | orchestrator | Monday 19 May 2025 15:06:19 +0000 (0:00:00.271) 0:00:09.571 ************ 2025-05-19 15:06:22.810272 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:22.810283 | orchestrator | 2025-05-19 15:06:22.810294 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-05-19 15:06:22.810305 | orchestrator | Monday 19 May 2025 15:06:20 +0000 (0:00:00.591) 0:00:10.162 ************ 2025-05-19 15:06:22.810315 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-19 15:06:22.810326 | orchestrator | 2025-05-19 15:06:22.810337 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-05-19 15:06:22.810348 | orchestrator | Monday 19 May 2025 15:06:21 +0000 (0:00:01.513) 0:00:11.676 ************ 2025-05-19 15:06:22.810358 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:22.810369 | orchestrator | 2025-05-19 15:06:22.810379 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-05-19 15:06:22.810390 | orchestrator | Monday 19 May 2025 15:06:22 +0000 (0:00:00.152) 0:00:11.828 ************ 2025-05-19 15:06:22.810401 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:22.810411 | orchestrator | 2025-05-19 15:06:22.810422 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-05-19 15:06:22.810433 | orchestrator | Monday 19 May 2025 15:06:22 +0000 (0:00:00.222) 0:00:12.051 ************ 2025-05-19 15:06:22.810444 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:22.810454 | orchestrator | 2025-05-19 15:06:22.810465 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-05-19 15:06:22.810476 | orchestrator | Monday 19 May 2025 15:06:22 +0000 (0:00:00.107) 0:00:12.158 ************ 2025-05-19 15:06:22.810486 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:22.810497 | orchestrator | 2025-05-19 15:06:22.810507 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-19 15:06:22.810518 | orchestrator | Monday 19 May 2025 15:06:22 +0000 (0:00:00.120) 0:00:12.279 ************ 2025-05-19 15:06:22.810536 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:22.810547 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:22.810557 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:22.810568 | orchestrator | 2025-05-19 15:06:22.810579 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-05-19 15:06:22.810597 | orchestrator | Monday 19 May 2025 15:06:22 +0000 (0:00:00.263) 0:00:12.542 ************ 2025-05-19 15:06:34.040535 | orchestrator | changed: [testbed-node-3] 2025-05-19 15:06:34.040649 | orchestrator | changed: [testbed-node-4] 2025-05-19 15:06:34.040663 | orchestrator | changed: [testbed-node-5] 2025-05-19 15:06:34.040675 | orchestrator | 2025-05-19 15:06:34.040688 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-05-19 15:06:34.040700 | orchestrator | Monday 19 May 2025 15:06:25 +0000 (0:00:02.496) 0:00:15.038 ************ 2025-05-19 15:06:34.040711 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:34.040723 | orchestrator | ok: [testbed-node-4] 2025-05-19 
15:06:34.040733 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:34.040744 | orchestrator | 2025-05-19 15:06:34.040755 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-05-19 15:06:34.040766 | orchestrator | Monday 19 May 2025 15:06:25 +0000 (0:00:00.301) 0:00:15.339 ************ 2025-05-19 15:06:34.040776 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:34.040788 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:34.040798 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:34.040809 | orchestrator | 2025-05-19 15:06:34.040820 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-05-19 15:06:34.040830 | orchestrator | Monday 19 May 2025 15:06:25 +0000 (0:00:00.388) 0:00:15.728 ************ 2025-05-19 15:06:34.040841 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:34.040852 | orchestrator | skipping: [testbed-node-4] 2025-05-19 15:06:34.040863 | orchestrator | skipping: [testbed-node-5] 2025-05-19 15:06:34.040873 | orchestrator | 2025-05-19 15:06:34.040884 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-05-19 15:06:34.040895 | orchestrator | Monday 19 May 2025 15:06:26 +0000 (0:00:00.276) 0:00:16.005 ************ 2025-05-19 15:06:34.040905 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:34.040916 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:34.040927 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:34.040937 | orchestrator | 2025-05-19 15:06:34.040948 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-05-19 15:06:34.040959 | orchestrator | Monday 19 May 2025 15:06:26 +0000 (0:00:00.463) 0:00:16.468 ************ 2025-05-19 15:06:34.040970 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:34.040980 | orchestrator | skipping: [testbed-node-4] 2025-05-19 15:06:34.040991 | orchestrator | skipping: [testbed-node-5] 2025-05-19 15:06:34.041002 | orchestrator | 2025-05-19 15:06:34.041012 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-05-19 15:06:34.041073 | orchestrator | Monday 19 May 2025 15:06:27 +0000 (0:00:00.285) 0:00:16.754 ************ 2025-05-19 15:06:34.041088 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:34.041101 | orchestrator | skipping: [testbed-node-4] 2025-05-19 15:06:34.041113 | orchestrator | skipping: [testbed-node-5] 2025-05-19 15:06:34.041126 | orchestrator | 2025-05-19 15:06:34.041138 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-19 15:06:34.041151 | orchestrator | Monday 19 May 2025 15:06:27 +0000 (0:00:00.286) 0:00:17.040 ************ 2025-05-19 15:06:34.041163 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:34.041175 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:34.041188 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:34.041200 | orchestrator | 2025-05-19 15:06:34.041239 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-05-19 15:06:34.041252 | orchestrator | Monday 19 May 2025 15:06:27 +0000 (0:00:00.379) 0:00:17.420 ************ 2025-05-19 15:06:34.041264 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:34.041276 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:34.041310 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:34.041323 | orchestrator | 2025-05-19 15:06:34.041335 | 
orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-05-19 15:06:34.041348 | orchestrator | Monday 19 May 2025 15:06:28 +0000 (0:00:00.588) 0:00:18.008 ************ 2025-05-19 15:06:34.041361 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:34.041373 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:34.041385 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:34.041397 | orchestrator | 2025-05-19 15:06:34.041410 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-05-19 15:06:34.041422 | orchestrator | Monday 19 May 2025 15:06:28 +0000 (0:00:00.283) 0:00:18.292 ************ 2025-05-19 15:06:34.041434 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:34.041444 | orchestrator | skipping: [testbed-node-4] 2025-05-19 15:06:34.041455 | orchestrator | skipping: [testbed-node-5] 2025-05-19 15:06:34.041465 | orchestrator | 2025-05-19 15:06:34.041476 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-05-19 15:06:34.041487 | orchestrator | Monday 19 May 2025 15:06:28 +0000 (0:00:00.261) 0:00:18.553 ************ 2025-05-19 15:06:34.041498 | orchestrator | ok: [testbed-node-3] 2025-05-19 15:06:34.041508 | orchestrator | ok: [testbed-node-4] 2025-05-19 15:06:34.041519 | orchestrator | ok: [testbed-node-5] 2025-05-19 15:06:34.041529 | orchestrator | 2025-05-19 15:06:34.041540 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-19 15:06:34.041551 | orchestrator | Monday 19 May 2025 15:06:29 +0000 (0:00:00.433) 0:00:18.986 ************ 2025-05-19 15:06:34.041562 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:34.041573 | orchestrator | 2025-05-19 15:06:34.041589 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-19 15:06:34.041600 | orchestrator | Monday 19 May 2025 15:06:29 +0000 (0:00:00.228) 0:00:19.215 ************ 2025-05-19 15:06:34.041610 | orchestrator | skipping: [testbed-node-3] 2025-05-19 15:06:34.041621 | orchestrator | 2025-05-19 15:06:34.041632 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-19 15:06:34.041656 | orchestrator | Monday 19 May 2025 15:06:29 +0000 (0:00:00.223) 0:00:19.439 ************ 2025-05-19 15:06:34.041678 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:34.041689 | orchestrator | 2025-05-19 15:06:34.041700 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-19 15:06:34.041711 | orchestrator | Monday 19 May 2025 15:06:31 +0000 (0:00:01.647) 0:00:21.087 ************ 2025-05-19 15:06:34.041721 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:34.041732 | orchestrator | 2025-05-19 15:06:34.041743 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-19 15:06:34.041754 | orchestrator | Monday 19 May 2025 15:06:31 +0000 (0:00:00.247) 0:00:21.335 ************ 2025-05-19 15:06:34.041782 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 15:06:34.041793 | orchestrator | 2025-05-19 15:06:34.041804 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 15:06:34.041815 | orchestrator | Monday 19 May 2025 15:06:31 +0000 (0:00:00.253) 
0:00:21.588 ************
2025-05-19 15:06:34.041826 | orchestrator |
2025-05-19 15:06:34.041836 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 15:06:34.041847 | orchestrator | Monday 19 May 2025 15:06:31 +0000 (0:00:00.065) 0:00:21.654 ************
2025-05-19 15:06:34.041858 | orchestrator |
2025-05-19 15:06:34.041868 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 15:06:34.041879 | orchestrator | Monday 19 May 2025 15:06:31 +0000 (0:00:00.068) 0:00:21.722 ************
2025-05-19 15:06:34.041890 | orchestrator |
2025-05-19 15:06:34.041901 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-05-19 15:06:34.041911 | orchestrator | Monday 19 May 2025 15:06:32 +0000 (0:00:00.080) 0:00:21.802 ************
2025-05-19 15:06:34.041930 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-19 15:06:34.041941 | orchestrator |
2025-05-19 15:06:34.041951 | orchestrator | TASK [Print report file information] *******************************************
2025-05-19 15:06:34.041962 | orchestrator | Monday 19 May 2025 15:06:33 +0000 (0:00:01.191) 0:00:22.994 ************
2025-05-19 15:06:34.041972 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-05-19 15:06:34.041984 | orchestrator |  "msg": [
2025-05-19 15:06:34.041995 | orchestrator |  "Validator run completed.",
2025-05-19 15:06:34.042005 | orchestrator |  "You can find the report file here:",
2025-05-19 15:06:34.042067 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-05-19T15:06:11+00:00-report.json",
2025-05-19 15:06:34.042081 | orchestrator |  "on the following host:",
2025-05-19 15:06:34.042092 | orchestrator |  "testbed-manager"
2025-05-19 15:06:34.042102 | orchestrator |  ]
2025-05-19 15:06:34.042114 | orchestrator | }
2025-05-19 15:06:34.042125 | orchestrator |
2025-05-19 15:06:34.042136 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 15:06:34.042147 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-05-19 15:06:34.042160 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-19 15:06:34.042170 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-19 15:06:34.042181 | orchestrator |
2025-05-19 15:06:34.042192 | orchestrator |
2025-05-19 15:06:34.042234 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 15:06:34.042246 | orchestrator | Monday 19 May 2025 15:06:33 +0000 (0:00:00.509) 0:00:23.503 ************
2025-05-19 15:06:34.042257 | orchestrator | ===============================================================================
2025-05-19 15:06:34.042268 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.50s
2025-05-19 15:06:34.042278 | orchestrator | Aggregate test results step one ----------------------------------------- 1.65s
2025-05-19 15:06:34.042289 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.51s
2025-05-19 15:06:34.042300 | orchestrator | Write report file ------------------------------------------------------- 1.19s
2025-05-19 15:06:34.042310 | orchestrator | Create report output directory ------------------------------------------ 0.86s
2025-05-19 15:06:34.042321 | orchestrator | Aggregate test results step one ----------------------------------------- 0.61s
2025-05-19 15:06:34.042332 | orchestrator | Get timestamp for report file ------------------------------------------- 0.59s
2025-05-19 15:06:34.042342 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.59s
2025-05-19 15:06:34.042353 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.59s
2025-05-19 15:06:34.042364 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.51s
2025-05-19 15:06:34.042374 | orchestrator | Print report file information ------------------------------------------- 0.51s
2025-05-19 15:06:34.042385 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.48s
2025-05-19 15:06:34.042396 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.46s
2025-05-19 15:06:34.042406 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.45s
2025-05-19 15:06:34.042422 | orchestrator | Prepare test data ------------------------------------------------------- 0.44s
2025-05-19 15:06:34.042433 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.43s
2025-05-19 15:06:34.042444 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.42s
2025-05-19 15:06:34.042455 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.39s
2025-05-19 15:06:34.042466 | orchestrator | Prepare test data ------------------------------------------------------- 0.38s
2025-05-19 15:06:34.042485 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.37s
2025-05-19 15:06:34.267771 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-05-19 15:06:34.276556 | orchestrator | + set -e
2025-05-19 15:06:34.277257 | orchestrator | + source /opt/manager-vars.sh
2025-05-19 15:06:34.277290 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-19 15:06:34.277302 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-19 15:06:34.277313 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-19 15:06:34.277435 | orchestrator | ++ CEPH_VERSION=reef
2025-05-19 15:06:34.277521 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-19 15:06:34.277537 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-19 15:06:34.277549 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-19 15:06:34.277559 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-19 15:06:34.277570 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-19 15:06:34.277581 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-19 15:06:34.277591 | orchestrator | ++ export ARA=false
2025-05-19 15:06:34.277602 | orchestrator | ++ ARA=false
2025-05-19 15:06:34.277613 | orchestrator | ++ export TEMPEST=false
2025-05-19 15:06:34.277623 | orchestrator | ++ TEMPEST=false
2025-05-19 15:06:34.277634 | orchestrator | ++ export IS_ZUUL=true
2025-05-19 15:06:34.277644 | orchestrator | ++ IS_ZUUL=true
2025-05-19 15:06:34.277655 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238
2025-05-19 15:06:34.277666 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238
2025-05-19 15:06:34.277677 | orchestrator | ++ export EXTERNAL_API=false
2025-05-19 15:06:34.277687 | orchestrator | ++ EXTERNAL_API=false
2025-05-19 15:06:34.277698 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-19 15:06:34.277708 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-19 15:06:34.277719 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-19 15:06:34.277729 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-19 15:06:34.277740 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-19 15:06:34.277750 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-19 15:06:34.277761 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-19 15:06:34.277771 | orchestrator | + source /etc/os-release
2025-05-19 15:06:34.277782 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-05-19 15:06:34.277792 | orchestrator | ++ NAME=Ubuntu
2025-05-19 15:06:34.277803 | orchestrator | ++ VERSION_ID=24.04
2025-05-19 15:06:34.277814 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-05-19 15:06:34.277824 | orchestrator | ++ VERSION_CODENAME=noble
2025-05-19 15:06:34.277835 | orchestrator | ++ ID=ubuntu
2025-05-19 15:06:34.277846 | orchestrator | ++ ID_LIKE=debian
2025-05-19 15:06:34.277857 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-05-19 15:06:34.277868 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-05-19 15:06:34.277879 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-05-19 15:06:34.277890 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-05-19 15:06:34.277902 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-05-19 15:06:34.277912 | orchestrator | ++ LOGO=ubuntu-logo
2025-05-19 15:06:34.277923 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-05-19 15:06:34.277934 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-05-19 15:06:34.277947 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-05-19 15:06:34.300114 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-05-19 15:06:53.039678 | orchestrator |
2025-05-19 15:06:53.039798 | orchestrator | # Status of Elasticsearch
2025-05-19 15:06:53.039816 | orchestrator |
2025-05-19 15:06:53.039828 | orchestrator | + pushd /opt/configuration/contrib
2025-05-19 15:06:53.039841 | orchestrator | + echo
2025-05-19 15:06:53.039853 | orchestrator | + echo '# Status of Elasticsearch'
2025-05-19 15:06:53.039864 | orchestrator | + echo
2025-05-19 15:06:53.039875 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-05-19 15:06:53.197106 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
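The check_elasticsearch plugin used above is essentially a wrapper around the cluster health endpoint, whose fields (status, number_of_nodes, active_shards, and so on) make up the plugin output. The same data can be fetched directly; a minimal sketch, assuming the usual Elasticsearch REST port 9200 behind the internal API name (the port does not appear in the log and is an assumption):

    # Query cluster health directly; -k tolerates the testbed's internal CA.
    curl -sk 'https://api-int.testbed.osism.xyz:9200/_cluster/health?pretty'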
2025-05-19 15:06:53.197203 | orchestrator |
2025-05-19 15:06:53.197217 | orchestrator | # Status of MariaDB
2025-05-19 15:06:53.197311 | orchestrator |
2025-05-19 15:06:53.197324 | orchestrator | + echo
2025-05-19 15:06:53.197335 | orchestrator | + echo '# Status of MariaDB'
2025-05-19 15:06:53.197346 | orchestrator | + echo
2025-05-19 15:06:53.197356 | orchestrator | + MARIADB_USER=root_shard_0
2025-05-19 15:06:53.197380 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-05-19 15:06:53.261825 | orchestrator | Reading package lists...
2025-05-19 15:06:53.535541 | orchestrator | Building dependency tree...
2025-05-19 15:06:53.535890 | orchestrator | Reading state information...
2025-05-19 15:06:53.869008 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2025-05-19 15:06:53.869115 | orchestrator | bc set to manually installed.
2025-05-19 15:06:53.869132 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
2025-05-19 15:06:54.480176 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-05-19 15:06:54.480946 | orchestrator |
2025-05-19 15:06:54.480978 | orchestrator | # Status of Prometheus
2025-05-19 15:06:54.480991 | orchestrator |
2025-05-19 15:06:54.481002 | orchestrator | + echo
2025-05-19 15:06:54.481013 | orchestrator | + echo '# Status of Prometheus'
2025-05-19 15:06:54.481024 | orchestrator | + echo
2025-05-19 15:06:54.481035 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-05-19 15:06:54.530387 | orchestrator | Unauthorized
2025-05-19 15:06:54.533180 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-05-19 15:06:54.593829 | orchestrator | Unauthorized
2025-05-19 15:06:54.596297 | orchestrator |
2025-05-19 15:06:54.596368 | orchestrator | # Status of RabbitMQ
2025-05-19 15:06:54.596385 | orchestrator |
2025-05-19 15:06:54.596397 | orchestrator | + echo
2025-05-19 15:06:54.596408 | orchestrator | + echo '# Status of RabbitMQ'
2025-05-19 15:06:54.596419 | orchestrator | + echo
2025-05-19 15:06:54.596431 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-05-19 15:06:55.010437 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-05-19 15:06:55.019768 | orchestrator |
2025-05-19 15:06:55.019812 | orchestrator | # Status of Redis
2025-05-19 15:06:55.019826 | orchestrator |
2025-05-19 15:06:55.019838 | orchestrator | + echo
2025-05-19 15:06:55.019849 | orchestrator | + echo '# Status of Redis'
2025-05-19 15:06:55.019861 | orchestrator | + echo
2025-05-19 15:06:55.019873 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-05-19 15:06:55.026303 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001562s;;;0.000000;10.000000
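Each of the probes above boils down to one query against the service: check_galera_cluster reads the wsrep_cluster_size status variable, check_rabbitmq_cluster counts running cluster nodes, and the Redis probe speaks the Redis protocol over a raw TCP connection. The Galera case can be verified by hand with the mysql client installed at the top of this script; a minimal sketch, assuming the default MariaDB port 3306 on the internal API name and the same credentials the plugin used:

    # Expect wsrep_cluster_size = 3 on this three-node Galera cluster.
    mysql -h api-int.testbed.osism.xyz -u root_shard_0 -ppassword \
      -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"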
2025-05-19 15:06:55.027212 | orchestrator |
2025-05-19 15:06:55.027268 | orchestrator | # Create backup of MariaDB database
2025-05-19 15:06:55.027282 | orchestrator |
2025-05-19 15:06:55.027295 | orchestrator | + popd
2025-05-19 15:06:55.027308 | orchestrator | + echo
2025-05-19 15:06:55.027319 | orchestrator | + echo '# Create backup of MariaDB database'
2025-05-19 15:06:55.027330 | orchestrator | + echo
2025-05-19 15:06:55.027362 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-05-19 15:06:56.715476 | orchestrator | 2025-05-19 15:06:56 | INFO  | Task c643921f-a6e6-4067-b163-50b912ae8e5a (mariadb_backup) was prepared for execution.
2025-05-19 15:06:56.715585 | orchestrator | 2025-05-19 15:06:56 | INFO  | It takes a moment until task c643921f-a6e6-4067-b163-50b912ae8e5a (mariadb_backup) has been started and output is visible here.
2025-05-19 15:07:00.539282 | orchestrator |
2025-05-19 15:07:00.539641 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 15:07:00.540816 | orchestrator |
2025-05-19 15:07:00.540912 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 15:07:00.541676 | orchestrator | Monday 19 May 2025 15:07:00 +0000 (0:00:00.190) 0:00:00.190 ************
2025-05-19 15:07:00.726708 | orchestrator | ok: [testbed-node-0]
2025-05-19 15:07:00.842451 | orchestrator | ok: [testbed-node-1]
2025-05-19 15:07:00.843432 | orchestrator | ok: [testbed-node-2]
2025-05-19 15:07:00.844102 | orchestrator |
2025-05-19 15:07:00.847641 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 15:07:00.847672 | orchestrator | Monday 19 May 2025 15:07:00 +0000 (0:00:00.305) 0:00:00.495 ************
2025-05-19 15:07:01.439670 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-05-19 15:07:01.441480 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-05-19 15:07:01.443050 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-05-19 15:07:01.445483 | orchestrator |
2025-05-19 15:07:01.445541 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-05-19 15:07:01.450600 | orchestrator |
2025-05-19 15:07:01.450629 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-05-19 15:07:01.450828 | orchestrator | Monday 19 May 2025 15:07:01 +0000 (0:00:00.595) 0:00:01.090 ************
2025-05-19 15:07:01.828207 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 15:07:01.828527 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-19 15:07:01.829510 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-19 15:07:01.830623 | orchestrator |
2025-05-19 15:07:01.831631 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-19 15:07:01.832466 | orchestrator | Monday 19 May 2025 15:07:01 +0000 (0:00:00.388) 0:00:01.479 ************
2025-05-19 15:07:02.319515 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 15:07:02.321075 | orchestrator |
2025-05-19 15:07:02.321173 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-05-19 15:07:02.322063 | orchestrator | Monday 19 May 2025 15:07:02 +0000 (0:00:00.492) 0:00:01.971 ************
2025-05-19 15:07:05.341822 | orchestrator | ok: [testbed-node-0]
2025-05-19 15:07:05.341912 | orchestrator | ok: [testbed-node-1]
2025-05-19 15:07:05.341928 | orchestrator | ok: [testbed-node-2]
2025-05-19 15:07:05.341940 | orchestrator |
2025-05-19 15:07:05.341952 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-05-19 15:07:05.341964 | orchestrator | Monday 19 May 2025 15:07:05 +0000 (0:00:03.015) 0:00:04.987 ************
2025-05-19 15:07:22.410905 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-05-19 15:07:22.411026 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-05-19 15:07:22.411042 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-19 15:07:22.411056 | orchestrator | mariadb_bootstrap_restart
2025-05-19 15:07:22.482824 | orchestrator | skipping: [testbed-node-1]
2025-05-19 15:07:22.484138 | orchestrator | skipping: [testbed-node-2]
2025-05-19 15:07:22.484513 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:07:22.485997 | orchestrator |
2025-05-19 15:07:22.486864 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-05-19 15:07:22.488488 | orchestrator | skipping: no hosts matched
2025-05-19 15:07:22.489652 | orchestrator |
2025-05-19 15:07:22.490102 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-05-19 15:07:22.491459 | orchestrator | skipping: no hosts matched
2025-05-19 15:07:22.493148 | orchestrator |
2025-05-19 15:07:22.494375 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-05-19 15:07:22.495898 | orchestrator | skipping: no hosts matched
2025-05-19 15:07:22.497187 | orchestrator |
2025-05-19 15:07:22.498122 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-05-19 15:07:22.499508 | orchestrator |
2025-05-19 15:07:22.500220 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-05-19 15:07:22.501464 | orchestrator | Monday 19 May 2025 15:07:22 +0000 (0:00:17.147) 0:00:22.135 ************
2025-05-19 15:07:22.659720 | orchestrator | skipping: [testbed-node-0]
2025-05-19 15:07:22.767424 | orchestrator | skipping: [testbed-node-1]
2025-05-19 15:07:22.767525 | orchestrator | skipping: [testbed-node-2]
2025-05-19 15:07:22.768468 | orchestrator |
2025-05-19 15:07:22.769372 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-05-19 15:07:22.773366 | orchestrator | Monday 19 May 2025 15:07:22 +0000 (0:00:00.285) 0:00:22.420 ************
2025-05-19 15:07:23.089428 | orchestrator | skipping: [testbed-node-0]
2025-05-19 15:07:23.130963 | orchestrator | skipping: [testbed-node-1]
2025-05-19 15:07:23.132124 | orchestrator | skipping: [testbed-node-2]
2025-05-19 15:07:23.132575 | orchestrator |
2025-05-19 15:07:23.134213 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 15:07:23.134285 | orchestrator | 2025-05-19 15:07:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 15:07:23.134305 | orchestrator | 2025-05-19 15:07:23 | INFO  | Please wait and do not abort execution.
2025-05-19 15:07:23.134677 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 15:07:23.135493 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 15:07:23.135888 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 15:07:23.136834 | orchestrator |
2025-05-19 15:07:23.137408 | orchestrator |
2025-05-19 15:07:23.137823 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 15:07:23.138423 | orchestrator | Monday 19 May 2025 15:07:23 +0000 (0:00:00.362) 0:00:22.783 ************
2025-05-19 15:07:23.138732 | orchestrator | ===============================================================================
2025-05-19 15:07:23.139351 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.15s
2025-05-19 15:07:23.139754 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.02s
2025-05-19 15:07:23.140053 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2025-05-19 15:07:23.140331 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.49s
2025-05-19 15:07:23.140807 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s
2025-05-19 15:07:23.141374 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.36s
2025-05-19 15:07:23.141781 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-05-19 15:07:23.142627 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s
2025-05-19 15:07:23.586411 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=incremental
2025-05-19 15:07:25.251702 | orchestrator | 2025-05-19 15:07:25 | INFO  | Task 51608e31-7ac0-4e90-8d60-d1067ce325c8 (mariadb_backup) was prepared for execution.
2025-05-19 15:07:25.251824 | orchestrator | 2025-05-19 15:07:25 | INFO  | It takes a moment until task 51608e31-7ac0-4e90-8d60-d1067ce325c8 (mariadb_backup) has been started and output is visible here.
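After the full backup, the same play is run again with mariadb_backup_type=incremental, which stores only the changes made since the preceding full backup. To inspect the resulting artifacts one could list the backup volume on testbed-node-0, the shard host on which Mariabackup ran; a hypothetical sketch, where the volume name mariadb_backup is an assumption based on the kolla-ansible default rather than something shown in this log:

    # On testbed-node-0: list backup artifacts (volume name is an assumption).
    docker run --rm -v mariadb_backup:/backup:ro alpine ls -lh /backup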
2025-05-19 15:07:29.015766 | orchestrator |
2025-05-19 15:07:29.015866 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 15:07:29.016324 | orchestrator |
2025-05-19 15:07:29.017651 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 15:07:29.018333 | orchestrator | Monday 19 May 2025 15:07:29 +0000 (0:00:00.146) 0:00:00.146 ************
2025-05-19 15:07:29.150450 | orchestrator | ok: [testbed-node-0]
2025-05-19 15:07:29.267808 | orchestrator | ok: [testbed-node-1]
2025-05-19 15:07:29.268862 | orchestrator | ok: [testbed-node-2]
2025-05-19 15:07:29.272134 | orchestrator |
2025-05-19 15:07:29.272632 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 15:07:29.273677 | orchestrator | Monday 19 May 2025 15:07:29 +0000 (0:00:00.256) 0:00:00.403 ************
2025-05-19 15:07:29.717637 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-05-19 15:07:29.717723 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-05-19 15:07:29.718160 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-05-19 15:07:29.718653 | orchestrator |
2025-05-19 15:07:29.719357 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-05-19 15:07:29.719647 | orchestrator |
2025-05-19 15:07:29.720046 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-05-19 15:07:29.720516 | orchestrator | Monday 19 May 2025 15:07:29 +0000 (0:00:00.446) 0:00:00.849 ************
2025-05-19 15:07:30.065326 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 15:07:30.069747 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-19 15:07:30.071111 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-19 15:07:30.071455 | orchestrator |
2025-05-19 15:07:30.072643 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-19 15:07:30.073377 | orchestrator | Monday 19 May 2025 15:07:30 +0000 (0:00:00.349) 0:00:01.199 ************
2025-05-19 15:07:30.523900 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 15:07:30.525042 | orchestrator |
2025-05-19 15:07:30.528564 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-05-19 15:07:30.528603 | orchestrator | Monday 19 May 2025 15:07:30 +0000 (0:00:00.460) 0:00:01.659 ************
2025-05-19 15:07:33.552428 | orchestrator | ok: [testbed-node-0]
2025-05-19 15:07:33.555753 | orchestrator | ok: [testbed-node-2]
2025-05-19 15:07:33.556713 | orchestrator | ok: [testbed-node-1]
2025-05-19 15:07:33.557343 | orchestrator |
2025-05-19 15:07:33.560901 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************
2025-05-19 15:07:33.561138 | orchestrator | Monday 19 May 2025 15:07:33 +0000 (0:00:03.024) 0:00:04.684 ************
2025-05-19 15:07:50.665396 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-05-19 15:07:50.665515 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-05-19 15:07:50.665532 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-19 15:07:50.666458 | orchestrator | mariadb_bootstrap_restart
2025-05-19 15:07:50.739022 | orchestrator | skipping: [testbed-node-1]
2025-05-19 15:07:50.739115 | orchestrator | skipping: [testbed-node-2]
2025-05-19 15:07:50.740151 | orchestrator | changed: [testbed-node-0]
2025-05-19 15:07:50.741315 | orchestrator |
2025-05-19 15:07:50.745229 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-05-19 15:07:50.746134 | orchestrator | skipping: no hosts matched
2025-05-19 15:07:50.746719 | orchestrator |
2025-05-19 15:07:50.747372 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-05-19 15:07:50.748843 | orchestrator | skipping: no hosts matched
2025-05-19 15:07:50.748892 | orchestrator |
2025-05-19 15:07:50.749589 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-05-19 15:07:50.750309 | orchestrator | skipping: no hosts matched
2025-05-19 15:07:50.750630 | orchestrator |
2025-05-19 15:07:50.751549 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-05-19 15:07:50.752053 | orchestrator |
2025-05-19 15:07:50.752408 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-05-19 15:07:50.753064 | orchestrator | Monday 19 May 2025 15:07:50 +0000 (0:00:17.189) 0:00:21.873 ************
2025-05-19 15:07:50.913787 | orchestrator | skipping: [testbed-node-0]
2025-05-19 15:07:51.023654 | orchestrator | skipping: [testbed-node-1]
2025-05-19 15:07:51.023941 | orchestrator | skipping: [testbed-node-2]
2025-05-19 15:07:51.024520 | orchestrator |
2025-05-19 15:07:51.025066 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-05-19 15:07:51.025698 | orchestrator | Monday 19 May 2025 15:07:51 +0000 (0:00:00.284) 0:00:22.158 ************
2025-05-19 15:07:51.343403 | orchestrator | skipping: [testbed-node-0]
2025-05-19 15:07:51.379756 | orchestrator | skipping: [testbed-node-1]
2025-05-19 15:07:51.380132 | orchestrator | skipping: [testbed-node-2]
2025-05-19 15:07:51.380487 | orchestrator |
2025-05-19 15:07:51.381187 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 15:07:51.381621 | orchestrator | 2025-05-19 15:07:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 15:07:51.381691 | orchestrator | 2025-05-19 15:07:51 | INFO  | Please wait and do not abort execution.
2025-05-19 15:07:51.382157 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 15:07:51.382917 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 15:07:51.383357 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 15:07:51.383744 | orchestrator |
2025-05-19 15:07:51.384436 | orchestrator |
2025-05-19 15:07:51.385000 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 15:07:51.385772 | orchestrator | Monday 19 May 2025 15:07:51 +0000 (0:00:00.356) 0:00:22.515 ************
2025-05-19 15:07:51.385854 | orchestrator | ===============================================================================
2025-05-19 15:07:51.386476 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ----------- 17.19s
2025-05-19 15:07:51.387385 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.02s
2025-05-19 15:07:51.389017 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.46s
2025-05-19 15:07:51.389319 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s
2025-05-19 15:07:51.390469 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.36s
2025-05-19 15:07:51.390861 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.35s
2025-05-19 15:07:51.391843 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.28s
2025-05-19 15:07:51.394238 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s
2025-05-19 15:07:51.857372 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2025-05-19 15:07:51.864231 | orchestrator | + set -e
2025-05-19 15:07:51.864428 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-19 15:07:51.864449 | orchestrator | ++ export INTERACTIVE=false
2025-05-19 15:07:51.864461 | orchestrator | ++ INTERACTIVE=false
2025-05-19 15:07:51.864472 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-19 15:07:51.864482 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-19 15:07:51.864494 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-05-19 15:07:51.865561 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-05-19 15:07:51.871651 | orchestrator |
2025-05-19 15:07:51.871707 | orchestrator | # OpenStack endpoints
2025-05-19 15:07:51.871719 | orchestrator |
2025-05-19 15:07:51.871730 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-19 15:07:51.871740 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-19 15:07:51.871750 | orchestrator | + export OS_CLOUD=admin
2025-05-19 15:07:51.871759 | orchestrator | + OS_CLOUD=admin
2025-05-19 15:07:51.871769 | orchestrator | + echo
2025-05-19 15:07:51.871778 | orchestrator | + echo '# OpenStack endpoints'
2025-05-19 15:07:51.871788 | orchestrator | + echo
2025-05-19 15:07:51.871797 | orchestrator | + openstack endpoint list
2025-05-19 15:07:54.974643 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-05-19 15:07:54.974754 | orchestrator |
| ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-05-19 15:07:54.974772 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-05-19 15:07:54.974784 | orchestrator | | 1312dc4a4fda4ab885c0a36fc7032478 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-05-19 15:07:54.974796 | orchestrator | | 2188c70cc6aa4bec8b6d942aa094d63b | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-05-19 15:07:54.974827 | orchestrator | | 331b855f914f43a5b822b4776141a28b | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-05-19 15:07:54.974856 | orchestrator | | 42e82fe7e0d54f9090ccc6c90ed0bea0 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-05-19 15:07:54.974883 | orchestrator | | 4e25768cb9b0407e80e3917170231286 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-05-19 15:07:54.974894 | orchestrator | | 58b8d217d027444b8c2c5ea83454bc7b | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-05-19 15:07:54.974905 | orchestrator | | 5ee4a0227a5e4f1a8e4facaa22167ffc | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-05-19 15:07:54.974916 | orchestrator | | 66e658c3acb4496c8065d17e8f3bf0ac | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-05-19 15:07:54.974926 | orchestrator | | 68549c5069a546b791728f4106507423 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-05-19 15:07:54.974937 | orchestrator | | 6b405ea13b474d04a86700eb317a8e84 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-05-19 15:07:54.974948 | orchestrator | | 78f7227879254247baaedf993f774778 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-05-19 15:07:54.974958 | orchestrator | | 7a4963f62326477dababfac4367c2d32 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-05-19 15:07:54.974969 | orchestrator | | 81095b930096444a83e0860a4e3f972d | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-05-19 15:07:54.974980 | orchestrator | | b90940bd42b5440d8d68b4ed60bfdaf7 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-05-19 15:07:54.974991 | orchestrator | | bd45100869d54f80b08e4c698ccd57b6 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-05-19 15:07:54.975002 | orchestrator | | cf2f1108a0074d4b8ff8d50c6c98cbcd | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-05-19 15:07:54.975012 | orchestrator | | d50a22ec9cbe4eb989376f24392f2f4b | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-05-19 15:07:54.975023 | orchestrator | | da2e83c4a962419da9b3f7b36876b033 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-05-19 15:07:54.975034 | orchestrator | | db32f69bfa4b47bda009e99a4950c3eb | RegionOne | swift | 
object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-05-19 15:07:54.975045 | orchestrator | | dc1e6e3b843f40bc9d91434039b5c38d | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-05-19 15:07:54.975075 | orchestrator | | dd3dfcb3c30e480cb4614a49a9eab666 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-05-19 15:07:54.975086 | orchestrator | | e5fa9d1be23542a0bcab35bdfd0397b4 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-05-19 15:07:54.975105 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-05-19 15:07:55.208222 | orchestrator | 2025-05-19 15:07:55.208427 | orchestrator | # Cinder 2025-05-19 15:07:55.208457 | orchestrator | 2025-05-19 15:07:55.208480 | orchestrator | + echo 2025-05-19 15:07:55.208500 | orchestrator | + echo '# Cinder' 2025-05-19 15:07:55.208519 | orchestrator | + echo 2025-05-19 15:07:55.208539 | orchestrator | + openstack volume service list 2025-05-19 15:07:58.252155 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-05-19 15:07:58.252247 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-05-19 15:07:58.252256 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-05-19 15:07:58.252263 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-05-19T15:07:54.000000 | 2025-05-19 15:07:58.252267 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-05-19T15:07:55.000000 | 2025-05-19 15:07:58.252272 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-05-19T15:07:56.000000 | 2025-05-19 15:07:58.252319 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-05-19T15:07:51.000000 | 2025-05-19 15:07:58.252324 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-05-19T15:07:51.000000 | 2025-05-19 15:07:58.252328 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-05-19T15:07:52.000000 | 2025-05-19 15:07:58.252332 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-05-19T15:07:54.000000 | 2025-05-19 15:07:58.252336 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-05-19T15:07:55.000000 | 2025-05-19 15:07:58.252339 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-05-19T15:07:55.000000 | 2025-05-19 15:07:58.252343 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-05-19 15:07:58.468187 | orchestrator | 2025-05-19 15:07:58.468347 | orchestrator | # Neutron 2025-05-19 15:07:58.468365 | orchestrator | 2025-05-19 15:07:58.468377 | orchestrator | + echo 2025-05-19 15:07:58.468389 | orchestrator | + echo '# Neutron' 2025-05-19 15:07:58.468401 | orchestrator | + echo 2025-05-19 15:07:58.468411 | orchestrator | + openstack network agent list 2025-05-19 15:08:01.245161 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-05-19 15:08:01.245270 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-05-19 15:08:01.245285 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-05-19 15:08:01.245362 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-05-19 15:08:01.245375 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-05-19 15:08:01.245386 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-05-19 15:08:01.245397 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-05-19 15:08:01.245408 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-05-19 15:08:01.245419 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-05-19 15:08:01.245461 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-05-19 15:08:01.245473 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-05-19 15:08:01.245484 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-05-19 15:08:01.245494 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-05-19 15:08:01.490795 | orchestrator | + openstack network service provider list 2025-05-19 15:08:04.023446 | orchestrator | +---------------+------+---------+ 2025-05-19 15:08:04.023562 | orchestrator | | Service Type | Name | Default | 2025-05-19 15:08:04.023578 | orchestrator | +---------------+------+---------+ 2025-05-19 15:08:04.023590 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-05-19 15:08:04.023601 | orchestrator | +---------------+------+---------+ 2025-05-19 15:08:04.272471 | orchestrator | 2025-05-19 15:08:04.272553 | orchestrator | # Nova 2025-05-19 15:08:04.272563 | orchestrator | 2025-05-19 15:08:04.272572 | orchestrator | + echo 2025-05-19 15:08:04.272579 | orchestrator | + echo '# Nova' 2025-05-19 15:08:04.272587 | orchestrator | + echo 2025-05-19 15:08:04.272595 | orchestrator | + openstack compute service list 2025-05-19 15:08:07.452750 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-05-19 15:08:07.452858 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-05-19 15:08:07.452874 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-05-19 15:08:07.452887 | orchestrator | | 1e26b1fe-952b-4feb-b81f-3e73cdd59c7f | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-05-19T15:07:57.000000 | 2025-05-19 15:08:07.452899 | orchestrator | | 
18beb1e7-76f7-48ff-8ca6-9ea8badaf698 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-05-19T15:08:00.000000 | 2025-05-19 15:08:07.452909 | orchestrator | | f5290e8e-684d-445c-974f-d3a83263e804 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-05-19T15:08:03.000000 | 2025-05-19 15:08:07.452920 | orchestrator | | a839cc20-41d9-4c4d-a93d-f73a2c6ea2b3 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-05-19T15:08:01.000000 | 2025-05-19 15:08:07.452931 | orchestrator | | 9b32a0b3-b34c-4413-ade9-f12236b8ed7e | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-05-19T15:08:02.000000 | 2025-05-19 15:08:07.452941 | orchestrator | | 555db3ba-546b-4c46-a1b6-1d20ff6b5812 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-05-19T15:08:02.000000 | 2025-05-19 15:08:07.452952 | orchestrator | | fca84e72-3179-45ed-bdb6-e80035777af3 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-05-19T15:08:03.000000 | 2025-05-19 15:08:07.452963 | orchestrator | | 031fde2d-3c95-46f2-bf28-6f0755333d3c | nova-compute | testbed-node-4 | nova | enabled | up | 2025-05-19T15:08:04.000000 | 2025-05-19 15:08:07.452974 | orchestrator | | c048800b-dd21-4a32-899c-472ea288e81f | nova-compute | testbed-node-3 | nova | enabled | up | 2025-05-19T15:08:04.000000 | 2025-05-19 15:08:07.452986 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-05-19 15:08:07.676747 | orchestrator | + openstack hypervisor list 2025-05-19 15:08:11.927917 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-05-19 15:08:11.928033 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-05-19 15:08:11.928048 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-05-19 15:08:11.928083 | orchestrator | | 67fd2a3a-a55b-446e-9f6f-b4d09931f3f2 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-05-19 15:08:11.928094 | orchestrator | | eb89ccd2-2e3b-44f3-9a02-3c16819e49bb | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-05-19 15:08:11.928105 | orchestrator | | f3489b3f-0419-4191-bdbd-ed313038358a | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-05-19 15:08:11.928116 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-05-19 15:08:12.153475 | orchestrator | 2025-05-19 15:08:12.153569 | orchestrator | # Run OpenStack test play 2025-05-19 15:08:12.153584 | orchestrator | 2025-05-19 15:08:12.153596 | orchestrator | + echo 2025-05-19 15:08:12.153607 | orchestrator | + echo '# Run OpenStack test play' 2025-05-19 15:08:12.153619 | orchestrator | + echo 2025-05-19 15:08:12.153631 | orchestrator | + osism apply --environment openstack test 2025-05-19 15:08:13.753829 | orchestrator | 2025-05-19 15:08:13 | INFO  | Trying to run play test in environment openstack 2025-05-19 15:08:13.811832 | orchestrator | 2025-05-19 15:08:13 | INFO  | Task 082411c2-d934-4404-86bb-d3d8fd9ab695 (test) was prepared for execution. 2025-05-19 15:08:13.811919 | orchestrator | 2025-05-19 15:08:13 | INFO  | It takes a moment until task 082411c2-d934-4404-86bb-d3d8fd9ab695 (test) has been started and output is visible here. 
2025-05-19 15:08:17.545548 | orchestrator | 2025-05-19 15:08:17.546743 | orchestrator | PLAY [Create test project] ***************************************************** 2025-05-19 15:08:17.547528 | orchestrator | 2025-05-19 15:08:17.548435 | orchestrator | TASK [Create test domain] ****************************************************** 2025-05-19 15:08:17.549098 | orchestrator | Monday 19 May 2025 15:08:17 +0000 (0:00:00.055) 0:00:00.055 ************ 2025-05-19 15:08:20.352428 | orchestrator | changed: [localhost] 2025-05-19 15:08:20.353643 | orchestrator | 2025-05-19 15:08:20.354326 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-05-19 15:08:20.355079 | orchestrator | Monday 19 May 2025 15:08:20 +0000 (0:00:02.809) 0:00:02.865 ************ 2025-05-19 15:08:24.169612 | orchestrator | changed: [localhost] 2025-05-19 15:08:24.169720 | orchestrator | 2025-05-19 15:08:24.169987 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-05-19 15:08:24.170856 | orchestrator | Monday 19 May 2025 15:08:24 +0000 (0:00:03.814) 0:00:06.679 ************ 2025-05-19 15:08:30.010228 | orchestrator | changed: [localhost] 2025-05-19 15:08:30.010858 | orchestrator | 2025-05-19 15:08:30.012912 | orchestrator | TASK [Create test project] ***************************************************** 2025-05-19 15:08:30.013744 | orchestrator | Monday 19 May 2025 15:08:30 +0000 (0:00:05.840) 0:00:12.520 ************ 2025-05-19 15:08:33.908396 | orchestrator | changed: [localhost] 2025-05-19 15:08:33.908500 | orchestrator | 2025-05-19 15:08:33.909041 | orchestrator | TASK [Create test user] ******************************************************** 2025-05-19 15:08:33.910656 | orchestrator | Monday 19 May 2025 15:08:33 +0000 (0:00:03.897) 0:00:16.417 ************ 2025-05-19 15:08:37.935793 | orchestrator | changed: [localhost] 2025-05-19 15:08:37.935905 | orchestrator | 2025-05-19 15:08:37.937064 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-05-19 15:08:37.937981 | orchestrator | Monday 19 May 2025 15:08:37 +0000 (0:00:04.025) 0:00:20.443 ************ 2025-05-19 15:08:49.512899 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-05-19 15:08:49.513049 | orchestrator | changed: [localhost] => (item=member) 2025-05-19 15:08:49.513077 | orchestrator | changed: [localhost] => (item=creator) 2025-05-19 15:08:49.513546 | orchestrator | 2025-05-19 15:08:49.513957 | orchestrator | TASK [Create test server group] ************************************************ 2025-05-19 15:08:49.514886 | orchestrator | Monday 19 May 2025 15:08:49 +0000 (0:00:11.575) 0:00:32.018 ************ 2025-05-19 15:08:53.815610 | orchestrator | changed: [localhost] 2025-05-19 15:08:53.816673 | orchestrator | 2025-05-19 15:08:53.818003 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-05-19 15:08:53.818899 | orchestrator | Monday 19 May 2025 15:08:53 +0000 (0:00:04.306) 0:00:36.325 ************ 2025-05-19 15:08:58.749141 | orchestrator | changed: [localhost] 2025-05-19 15:08:58.749492 | orchestrator | 2025-05-19 15:08:58.750685 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-05-19 15:08:58.751290 | orchestrator | Monday 19 May 2025 15:08:58 +0000 (0:00:04.932) 0:00:41.257 ************ 2025-05-19 15:09:02.916573 | orchestrator | changed: [localhost] 2025-05-19 15:09:02.917029 | 
orchestrator | 2025-05-19 15:09:02.917314 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-05-19 15:09:02.917985 | orchestrator | Monday 19 May 2025 15:09:02 +0000 (0:00:04.170) 0:00:45.428 ************ 2025-05-19 15:09:06.865624 | orchestrator | changed: [localhost] 2025-05-19 15:09:06.865735 | orchestrator | 2025-05-19 15:09:06.866565 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-05-19 15:09:06.867931 | orchestrator | Monday 19 May 2025 15:09:06 +0000 (0:00:03.946) 0:00:49.374 ************ 2025-05-19 15:09:10.852108 | orchestrator | changed: [localhost] 2025-05-19 15:09:10.852999 | orchestrator | 2025-05-19 15:09:10.853490 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-05-19 15:09:10.853974 | orchestrator | Monday 19 May 2025 15:09:10 +0000 (0:00:03.986) 0:00:53.361 ************ 2025-05-19 15:09:14.651329 | orchestrator | changed: [localhost] 2025-05-19 15:09:14.651543 | orchestrator | 2025-05-19 15:09:14.653827 | orchestrator | TASK [Create test network topology] ******************************************** 2025-05-19 15:09:14.655031 | orchestrator | Monday 19 May 2025 15:09:14 +0000 (0:00:03.796) 0:00:57.157 ************ 2025-05-19 15:09:28.153694 | orchestrator | changed: [localhost] 2025-05-19 15:09:28.153812 | orchestrator | 2025-05-19 15:09:28.153830 | orchestrator | TASK [Create test instances] *************************************************** 2025-05-19 15:09:28.153844 | orchestrator | Monday 19 May 2025 15:09:28 +0000 (0:00:13.502) 0:01:10.660 ************ 2025-05-19 15:11:47.454923 | orchestrator | changed: [localhost] => (item=test) 2025-05-19 15:11:47.455059 | orchestrator | changed: [localhost] => (item=test-1) 2025-05-19 15:11:47.455076 | orchestrator | changed: [localhost] => (item=test-2) 2025-05-19 15:11:47.455863 | orchestrator | 2025-05-19 15:11:47.456635 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-05-19 15:12:17.454269 | orchestrator | changed: [localhost] => (item=test-3) 2025-05-19 15:12:17.454420 | orchestrator | 2025-05-19 15:12:17.454436 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-05-19 15:12:38.894639 | orchestrator | changed: [localhost] => (item=test-4) 2025-05-19 15:12:38.894764 | orchestrator | 2025-05-19 15:12:38.894781 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-05-19 15:12:38.895873 | orchestrator | Monday 19 May 2025 15:12:38 +0000 (0:03:10.740) 0:04:21.400 ************ 2025-05-19 15:13:01.743198 | orchestrator | changed: [localhost] => (item=test) 2025-05-19 15:13:01.743312 | orchestrator | changed: [localhost] => (item=test-1) 2025-05-19 15:13:01.743326 | orchestrator | changed: [localhost] => (item=test-2) 2025-05-19 15:13:01.744310 | orchestrator | changed: [localhost] => (item=test-3) 2025-05-19 15:13:01.744551 | orchestrator | changed: [localhost] => (item=test-4) 2025-05-19 15:13:01.745218 | orchestrator | 2025-05-19 15:13:01.745741 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-05-19 15:13:01.746791 | orchestrator | Monday 19 May 2025 15:13:01 +0000 (0:00:22.849) 0:04:44.250 ************ 2025-05-19 15:13:33.090068 | orchestrator | changed: [localhost] => (item=test) 2025-05-19 15:13:33.090278 | orchestrator | changed: [localhost] => (item=test-1) 
2025-05-19 15:13:33.090300 | orchestrator | changed: [localhost] => (item=test-2) 2025-05-19 15:13:33.090388 | orchestrator | changed: [localhost] => (item=test-3) 2025-05-19 15:13:33.090403 | orchestrator | changed: [localhost] => (item=test-4) 2025-05-19 15:13:33.091287 | orchestrator | 2025-05-19 15:13:33.092542 | orchestrator | TASK [Create test volume] ****************************************************** 2025-05-19 15:13:33.093788 | orchestrator | Monday 19 May 2025 15:13:33 +0000 (0:00:31.339) 0:05:15.590 ************ 2025-05-19 15:13:39.698818 | orchestrator | changed: [localhost] 2025-05-19 15:13:39.699194 | orchestrator | 2025-05-19 15:13:39.699226 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-05-19 15:13:39.699918 | orchestrator | Monday 19 May 2025 15:13:39 +0000 (0:00:06.619) 0:05:22.209 ************ 2025-05-19 15:13:52.990324 | orchestrator | changed: [localhost] 2025-05-19 15:13:52.990489 | orchestrator | 2025-05-19 15:13:52.990521 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-05-19 15:13:52.990543 | orchestrator | Monday 19 May 2025 15:13:52 +0000 (0:00:13.282) 0:05:35.492 ************ 2025-05-19 15:13:57.866883 | orchestrator | ok: [localhost] 2025-05-19 15:13:57.867866 | orchestrator | 2025-05-19 15:13:57.868779 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-05-19 15:13:57.869030 | orchestrator | Monday 19 May 2025 15:13:57 +0000 (0:00:04.885) 0:05:40.378 ************ 2025-05-19 15:13:57.894889 | orchestrator | ok: [localhost] => { 2025-05-19 15:13:57.895677 | orchestrator |  "msg": "192.168.112.176" 2025-05-19 15:13:57.896782 | orchestrator | } 2025-05-19 15:13:57.897627 | orchestrator | 2025-05-19 15:13:57.898774 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 15:13:57.899141 | orchestrator | 2025-05-19 15:13:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 15:13:57.899632 | orchestrator | 2025-05-19 15:13:57 | INFO  | Please wait and do not abort execution. 
2025-05-19 15:13:57.900974 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 15:13:57.901338 | orchestrator | 2025-05-19 15:13:57.901952 | orchestrator | 2025-05-19 15:13:57.903557 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 15:13:57.904268 | orchestrator | Monday 19 May 2025 15:13:57 +0000 (0:00:00.027) 0:05:40.405 ************ 2025-05-19 15:13:57.904931 | orchestrator | =============================================================================== 2025-05-19 15:13:57.905092 | orchestrator | Create test instances ------------------------------------------------- 190.74s 2025-05-19 15:13:57.905723 | orchestrator | Add tag to instances --------------------------------------------------- 31.34s 2025-05-19 15:13:57.905909 | orchestrator | Add metadata to instances ---------------------------------------------- 22.85s 2025-05-19 15:13:57.906509 | orchestrator | Create test network topology ------------------------------------------- 13.50s 2025-05-19 15:13:57.906819 | orchestrator | Attach test volume ----------------------------------------------------- 13.28s 2025-05-19 15:13:57.907079 | orchestrator | Add member roles to user test ------------------------------------------ 11.58s 2025-05-19 15:13:57.907508 | orchestrator | Create test volume ------------------------------------------------------ 6.62s 2025-05-19 15:13:57.908070 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.84s 2025-05-19 15:13:57.908837 | orchestrator | Create ssh security group ----------------------------------------------- 4.93s 2025-05-19 15:13:57.909622 | orchestrator | Create floating ip address ---------------------------------------------- 4.89s 2025-05-19 15:13:57.909854 | orchestrator | Create test server group ------------------------------------------------ 4.31s 2025-05-19 15:13:57.910266 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.17s 2025-05-19 15:13:57.910587 | orchestrator | Create test user -------------------------------------------------------- 4.03s 2025-05-19 15:13:57.911016 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.99s 2025-05-19 15:13:57.911356 | orchestrator | Create icmp security group ---------------------------------------------- 3.95s 2025-05-19 15:13:57.911738 | orchestrator | Create test project ----------------------------------------------------- 3.90s 2025-05-19 15:13:57.912137 | orchestrator | Create test-admin user -------------------------------------------------- 3.81s 2025-05-19 15:13:57.912619 | orchestrator | Create test keypair ----------------------------------------------------- 3.80s 2025-05-19 15:13:57.912932 | orchestrator | Create test domain ------------------------------------------------------ 2.81s 2025-05-19 15:13:57.913257 | orchestrator | Print floating ip address ----------------------------------------------- 0.03s 2025-05-19 15:13:58.207494 | orchestrator | + server_list 2025-05-19 15:13:58.207583 | orchestrator | + openstack --os-cloud test server list 2025-05-19 15:14:01.909162 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-05-19 15:14:01.909271 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-05-19 15:14:01.909287 | orchestrator | 
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-05-19 15:14:01.909298 | orchestrator | | 630544bf-ec8b-4c67-95a1-bd72c43dbe21 | test-4 | ACTIVE | auto_allocated_network=10.42.0.31, 192.168.112.148 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-05-19 15:14:01.909309 | orchestrator | | caaf9de9-b08d-4288-9d17-37ce2bd2ee1b | test-3 | ACTIVE | auto_allocated_network=10.42.0.12, 192.168.112.117 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-05-19 15:14:01.909320 | orchestrator | | 33055056-a68f-4dff-b938-428c5f68477e | test-2 | ACTIVE | auto_allocated_network=10.42.0.21, 192.168.112.177 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-05-19 15:14:01.909331 | orchestrator | | 283d4ae5-cac8-4547-9575-80bc63830c83 | test-1 | ACTIVE | auto_allocated_network=10.42.0.4, 192.168.112.115 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-05-19 15:14:01.909342 | orchestrator | | b8b086f5-c14b-4b32-9177-e7b68df1f7c5 | test | ACTIVE | auto_allocated_network=10.42.0.45, 192.168.112.176 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-05-19 15:14:01.909353 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-05-19 15:14:02.142786 | orchestrator | + openstack --os-cloud test server show test 2025-05-19 15:14:05.433372 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:05.433548 | orchestrator | | Field | Value | 2025-05-19 15:14:05.433566 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:05.433578 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-05-19 15:14:05.433597 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-05-19 15:14:05.433609 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-05-19 15:14:05.433638 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-05-19 15:14:05.433650 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-05-19 15:14:05.433661 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-05-19 15:14:05.433671 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-05-19 15:14:05.433682 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-05-19 15:14:05.433710 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-05-19 15:14:05.433721 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-05-19 15:14:05.433732 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-05-19 15:14:05.433743 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-05-19 15:14:05.433754 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-05-19 15:14:05.433770 | orchestrator | | OS-EXT-STS:task_state | None | 2025-05-19 15:14:05.433787 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-05-19 15:14:05.433798 | orchestrator | | OS-SRV-USG:launched_at | 2025-05-19T15:09:57.000000 | 2025-05-19 15:14:05.433809 | orchestrator | | 
OS-SRV-USG:terminated_at | None | 2025-05-19 15:14:05.433820 | orchestrator | | accessIPv4 | | 2025-05-19 15:14:05.433831 | orchestrator | | accessIPv6 | | 2025-05-19 15:14:05.433842 | orchestrator | | addresses | auto_allocated_network=10.42.0.45, 192.168.112.176 | 2025-05-19 15:14:05.433860 | orchestrator | | config_drive | | 2025-05-19 15:14:05.433871 | orchestrator | | created | 2025-05-19T15:09:35Z | 2025-05-19 15:14:05.433882 | orchestrator | | description | None | 2025-05-19 15:14:05.433895 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-05-19 15:14:05.433907 | orchestrator | | hostId | ed8ac4f430f263a1e2ca0fadd11b4c0fdce024c1f7e4fb119c0d511f | 2025-05-19 15:14:05.433925 | orchestrator | | host_status | None | 2025-05-19 15:14:05.433938 | orchestrator | | id | b8b086f5-c14b-4b32-9177-e7b68df1f7c5 | 2025-05-19 15:14:05.433950 | orchestrator | | image | Cirros 0.6.2 (518bd81c-7d7f-4641-9c88-4eb559358b31) | 2025-05-19 15:14:05.433963 | orchestrator | | key_name | test | 2025-05-19 15:14:05.433975 | orchestrator | | locked | False | 2025-05-19 15:14:05.433987 | orchestrator | | locked_reason | None | 2025-05-19 15:14:05.434000 | orchestrator | | name | test | 2025-05-19 15:14:05.434115 | orchestrator | | pinned_availability_zone | None | 2025-05-19 15:14:05.434135 | orchestrator | | progress | 0 | 2025-05-19 15:14:05.434148 | orchestrator | | project_id | 06d650c252a647df950e244c5e2c3934 | 2025-05-19 15:14:05.434161 | orchestrator | | properties | hostname='test' | 2025-05-19 15:14:05.434185 | orchestrator | | security_groups | name='ssh' | 2025-05-19 15:14:05.434199 | orchestrator | | | name='icmp' | 2025-05-19 15:14:05.434211 | orchestrator | | server_groups | None | 2025-05-19 15:14:05.434224 | orchestrator | | status | ACTIVE | 2025-05-19 15:14:05.434236 | orchestrator | | tags | test | 2025-05-19 15:14:05.434248 | orchestrator | | trusted_image_certificates | None | 2025-05-19 15:14:05.434259 | orchestrator | | updated | 2025-05-19T15:12:43Z | 2025-05-19 15:14:05.434277 | orchestrator | | user_id | 002d3f56089b495f8f53df236ccd0e4d | 2025-05-19 15:14:05.434288 | orchestrator | | volumes_attached | delete_on_termination='False', id='836e31d5-e149-4caa-b338-a697ab75bf8f' | 2025-05-19 15:14:05.438449 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:05.671027 | orchestrator | + openstack --os-cloud test server show test-1 2025-05-19 15:14:08.698460 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:08.698591 | orchestrator | | Field | Value | 2025-05-19 15:14:08.698624 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:08.698637 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-05-19 15:14:08.698648 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-05-19 15:14:08.698659 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-05-19 15:14:08.698670 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-05-19 15:14:08.698681 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-05-19 15:14:08.698691 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-05-19 15:14:08.698702 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-05-19 15:14:08.698713 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-05-19 15:14:08.698762 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-05-19 15:14:08.698774 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-05-19 15:14:08.698785 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-05-19 15:14:08.698802 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-05-19 15:14:08.698833 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-05-19 15:14:08.698874 | orchestrator | | OS-EXT-STS:task_state | None | 2025-05-19 15:14:08.698886 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-05-19 15:14:08.698897 | orchestrator | | OS-SRV-USG:launched_at | 2025-05-19T15:10:37.000000 | 2025-05-19 15:14:08.698907 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-05-19 15:14:08.698919 | orchestrator | | accessIPv4 | | 2025-05-19 15:14:08.698930 | orchestrator | | accessIPv6 | | 2025-05-19 15:14:08.698949 | orchestrator | | addresses | auto_allocated_network=10.42.0.4, 192.168.112.115 | 2025-05-19 15:14:08.698967 | orchestrator | | config_drive | | 2025-05-19 15:14:08.698979 | orchestrator | | created | 2025-05-19T15:10:16Z | 2025-05-19 15:14:08.698995 | orchestrator | | description | None | 2025-05-19 15:14:08.699007 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-05-19 15:14:08.699017 | orchestrator | | hostId | ef127ca1b28707af866dede71732342d55db415434fc6e0f792437fb | 2025-05-19 15:14:08.699028 | orchestrator | | host_status | None | 2025-05-19 15:14:08.699039 | orchestrator | | id | 283d4ae5-cac8-4547-9575-80bc63830c83 | 2025-05-19 15:14:08.699050 | orchestrator | | image | Cirros 0.6.2 (518bd81c-7d7f-4641-9c88-4eb559358b31) | 2025-05-19 15:14:08.699060 | orchestrator | | key_name | test | 2025-05-19 15:14:08.699071 | orchestrator | | locked | False | 2025-05-19 15:14:08.699096 | orchestrator | | locked_reason | None | 2025-05-19 15:14:08.699107 | orchestrator | | name | test-1 | 2025-05-19 15:14:08.699125 | orchestrator | | pinned_availability_zone | None | 2025-05-19 15:14:08.699136 | orchestrator | | progress | 0 | 2025-05-19 15:14:08.699152 | orchestrator | | project_id | 06d650c252a647df950e244c5e2c3934 | 2025-05-19 15:14:08.699163 | orchestrator | | properties | hostname='test-1' | 2025-05-19 15:14:08.699174 | 
orchestrator | | security_groups | name='ssh' | 2025-05-19 15:14:08.699185 | orchestrator | | | name='icmp' | 2025-05-19 15:14:08.699196 | orchestrator | | server_groups | None | 2025-05-19 15:14:08.699207 | orchestrator | | status | ACTIVE | 2025-05-19 15:14:08.699227 | orchestrator | | tags | test | 2025-05-19 15:14:08.699238 | orchestrator | | trusted_image_certificates | None | 2025-05-19 15:14:08.699249 | orchestrator | | updated | 2025-05-19T15:12:48Z | 2025-05-19 15:14:08.699265 | orchestrator | | user_id | 002d3f56089b495f8f53df236ccd0e4d | 2025-05-19 15:14:08.699276 | orchestrator | | volumes_attached | | 2025-05-19 15:14:08.700569 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:08.944768 | orchestrator | + openstack --os-cloud test server show test-2 2025-05-19 15:14:11.955454 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:11.955649 | orchestrator | | Field | Value | 2025-05-19 15:14:11.955667 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:11.955680 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-05-19 15:14:11.955691 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-05-19 15:14:11.955726 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-05-19 15:14:11.955737 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-05-19 15:14:11.955748 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-05-19 15:14:11.955759 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-05-19 15:14:11.955770 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-05-19 15:14:11.955781 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-05-19 15:14:11.955816 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-05-19 15:14:11.955828 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-05-19 15:14:11.955839 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-05-19 15:14:11.955850 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-05-19 15:14:11.955861 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-05-19 15:14:11.955879 | orchestrator | | OS-EXT-STS:task_state | None | 2025-05-19 15:14:11.955890 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-05-19 15:14:11.955901 | orchestrator | | OS-SRV-USG:launched_at | 2025-05-19T15:11:16.000000 | 2025-05-19 15:14:11.955912 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-05-19 15:14:11.955923 | orchestrator | | accessIPv4 | | 2025-05-19 15:14:11.955933 | orchestrator | | accessIPv6 | | 2025-05-19 15:14:11.955945 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.21, 192.168.112.177 | 2025-05-19 15:14:11.955968 | orchestrator | | config_drive | | 2025-05-19 15:14:11.955980 | orchestrator | | created | 2025-05-19T15:10:56Z | 2025-05-19 15:14:11.955991 | orchestrator | | description | None | 2025-05-19 15:14:11.956009 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-05-19 15:14:11.956020 | orchestrator | | hostId | 72df7740f40f5670f2940f0af2d8abc02d548cab092e910af59e7eda | 2025-05-19 15:14:11.956031 | orchestrator | | host_status | None | 2025-05-19 15:14:11.956043 | orchestrator | | id | 33055056-a68f-4dff-b938-428c5f68477e | 2025-05-19 15:14:11.956064 | orchestrator | | image | Cirros 0.6.2 (518bd81c-7d7f-4641-9c88-4eb559358b31) | 2025-05-19 15:14:11.956085 | orchestrator | | key_name | test | 2025-05-19 15:14:11.956105 | orchestrator | | locked | False | 2025-05-19 15:14:11.956125 | orchestrator | | locked_reason | None | 2025-05-19 15:14:11.956160 | orchestrator | | name | test-2 | 2025-05-19 15:14:11.956193 | orchestrator | | pinned_availability_zone | None | 2025-05-19 15:14:11.956214 | orchestrator | | progress | 0 | 2025-05-19 15:14:11.956239 | orchestrator | | project_id | 06d650c252a647df950e244c5e2c3934 | 2025-05-19 15:14:11.956250 | orchestrator | | properties | hostname='test-2' | 2025-05-19 15:14:11.956261 | orchestrator | | security_groups | name='ssh' | 2025-05-19 15:14:11.956271 | orchestrator | | | name='icmp' | 2025-05-19 15:14:11.956282 | orchestrator | | server_groups | None | 2025-05-19 15:14:11.956293 | orchestrator | | status | ACTIVE | 2025-05-19 15:14:11.956304 | orchestrator | | tags | test | 2025-05-19 15:14:11.956314 | orchestrator | | trusted_image_certificates | None | 2025-05-19 15:14:11.956325 | orchestrator | | updated | 2025-05-19T15:12:52Z | 2025-05-19 15:14:11.956347 | orchestrator | | user_id | 002d3f56089b495f8f53df236ccd0e4d | 2025-05-19 15:14:11.956359 | orchestrator | | volumes_attached | | 2025-05-19 15:14:11.964095 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:12.203668 | orchestrator | + openstack --os-cloud test server show test-3 2025-05-19 15:14:15.234624 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:15.234766 | orchestrator | | Field | Value | 2025-05-19 15:14:15.234792 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:15.234812 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-05-19 15:14:15.234832 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-05-19 15:14:15.234850 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-05-19 15:14:15.234871 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-05-19 15:14:15.234891 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-05-19 15:14:15.234911 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-05-19 15:14:15.234952 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-05-19 15:14:15.235000 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-05-19 15:14:15.235045 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-05-19 15:14:15.235066 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-05-19 15:14:15.235084 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-05-19 15:14:15.235102 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-05-19 15:14:15.235120 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-05-19 15:14:15.235138 | orchestrator | | OS-EXT-STS:task_state | None | 2025-05-19 15:14:15.235156 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-05-19 15:14:15.235174 | orchestrator | | OS-SRV-USG:launched_at | 2025-05-19T15:11:50.000000 | 2025-05-19 15:14:15.235194 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-05-19 15:14:15.235224 | orchestrator | | accessIPv4 | | 2025-05-19 15:14:15.235242 | orchestrator | | accessIPv6 | | 2025-05-19 15:14:15.235260 | orchestrator | | addresses | auto_allocated_network=10.42.0.12, 192.168.112.117 | 2025-05-19 15:14:15.235287 | orchestrator | | config_drive | | 2025-05-19 15:14:15.235306 | orchestrator | | created | 2025-05-19T15:11:34Z | 2025-05-19 15:14:15.235333 | orchestrator | | description | None | 2025-05-19 15:14:15.235349 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-05-19 15:14:15.235367 | orchestrator | | hostId | 72df7740f40f5670f2940f0af2d8abc02d548cab092e910af59e7eda | 2025-05-19 15:14:15.235385 | orchestrator | | host_status | None | 2025-05-19 15:14:15.235403 | orchestrator | | id | caaf9de9-b08d-4288-9d17-37ce2bd2ee1b | 2025-05-19 15:14:15.235421 | orchestrator | | image | Cirros 0.6.2 (518bd81c-7d7f-4641-9c88-4eb559358b31) | 2025-05-19 15:14:15.235447 | orchestrator | | key_name | test | 2025-05-19 15:14:15.235472 | orchestrator | | locked | False | 2025-05-19 15:14:15.235529 | orchestrator | | locked_reason | None | 2025-05-19 15:14:15.235548 | orchestrator | | name | test-3 | 2025-05-19 15:14:15.235575 | orchestrator | | pinned_availability_zone | None | 2025-05-19 15:14:15.235594 | orchestrator | | progress | 0 | 2025-05-19 15:14:15.235611 | orchestrator | | project_id | 06d650c252a647df950e244c5e2c3934 | 2025-05-19 15:14:15.235628 | orchestrator | | properties | hostname='test-3' | 2025-05-19 15:14:15.235645 | 
orchestrator | | security_groups | name='ssh' | 2025-05-19 15:14:15.235663 | orchestrator | | | name='icmp' | 2025-05-19 15:14:15.235681 | orchestrator | | server_groups | None | 2025-05-19 15:14:15.235711 | orchestrator | | status | ACTIVE | 2025-05-19 15:14:15.235728 | orchestrator | | tags | test | 2025-05-19 15:14:15.235752 | orchestrator | | trusted_image_certificates | None | 2025-05-19 15:14:15.235768 | orchestrator | | updated | 2025-05-19T15:12:57Z | 2025-05-19 15:14:15.235794 | orchestrator | | user_id | 002d3f56089b495f8f53df236ccd0e4d | 2025-05-19 15:14:15.235811 | orchestrator | | volumes_attached | | 2025-05-19 15:14:15.238139 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:15.482842 | orchestrator | + openstack --os-cloud test server show test-4 2025-05-19 15:14:18.504967 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:18.505082 | orchestrator | | Field | Value | 2025-05-19 15:14:18.505098 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:18.505110 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-05-19 15:14:18.505151 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-05-19 15:14:18.505172 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-05-19 15:14:18.505210 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-05-19 15:14:18.505233 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-05-19 15:14:18.505246 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-05-19 15:14:18.505257 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-05-19 15:14:18.505268 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-05-19 15:14:18.505298 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-05-19 15:14:18.505310 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-05-19 15:14:18.505321 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-05-19 15:14:18.505341 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-05-19 15:14:18.505352 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-05-19 15:14:18.505363 | orchestrator | | OS-EXT-STS:task_state | None | 2025-05-19 15:14:18.505374 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-05-19 15:14:18.505390 | orchestrator | | OS-SRV-USG:launched_at | 2025-05-19T15:12:28.000000 | 2025-05-19 15:14:18.505401 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-05-19 15:14:18.505412 | orchestrator | | accessIPv4 | | 2025-05-19 15:14:18.505423 | orchestrator | | accessIPv6 | | 2025-05-19 15:14:18.505434 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.31, 192.168.112.148 | 2025-05-19 15:14:18.505452 | orchestrator | | config_drive | | 2025-05-19 15:14:18.505464 | orchestrator | | created | 2025-05-19T15:12:12Z | 2025-05-19 15:14:18.505532 | orchestrator | | description | None | 2025-05-19 15:14:18.505549 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-05-19 15:14:18.505561 | orchestrator | | hostId | ed8ac4f430f263a1e2ca0fadd11b4c0fdce024c1f7e4fb119c0d511f | 2025-05-19 15:14:18.505574 | orchestrator | | host_status | None | 2025-05-19 15:14:18.505586 | orchestrator | | id | 630544bf-ec8b-4c67-95a1-bd72c43dbe21 | 2025-05-19 15:14:18.505604 | orchestrator | | image | Cirros 0.6.2 (518bd81c-7d7f-4641-9c88-4eb559358b31) | 2025-05-19 15:14:18.505617 | orchestrator | | key_name | test | 2025-05-19 15:14:18.505629 | orchestrator | | locked | False | 2025-05-19 15:14:18.505641 | orchestrator | | locked_reason | None | 2025-05-19 15:14:18.505654 | orchestrator | | name | test-4 | 2025-05-19 15:14:18.505674 | orchestrator | | pinned_availability_zone | None | 2025-05-19 15:14:18.505694 | orchestrator | | progress | 0 | 2025-05-19 15:14:18.505707 | orchestrator | | project_id | 06d650c252a647df950e244c5e2c3934 | 2025-05-19 15:14:18.505719 | orchestrator | | properties | hostname='test-4' | 2025-05-19 15:14:18.505732 | orchestrator | | security_groups | name='ssh' | 2025-05-19 15:14:18.505744 | orchestrator | | | name='icmp' | 2025-05-19 15:14:18.505761 | orchestrator | | server_groups | None | 2025-05-19 15:14:18.505775 | orchestrator | | status | ACTIVE | 2025-05-19 15:14:18.505787 | orchestrator | | tags | test | 2025-05-19 15:14:18.505799 | orchestrator | | trusted_image_certificates | None | 2025-05-19 15:14:18.505811 | orchestrator | | updated | 2025-05-19T15:13:01Z | 2025-05-19 15:14:18.505830 | orchestrator | | user_id | 002d3f56089b495f8f53df236ccd0e4d | 2025-05-19 15:14:18.505849 | orchestrator | | volumes_attached | | 2025-05-19 15:14:18.509430 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 15:14:18.737908 | orchestrator | + server_ping 2025-05-19 15:14:18.738400 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-05-19 15:14:18.738581 | orchestrator | ++ tr -d '\r' 2025-05-19 15:14:21.684877 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:14:21.684987 | orchestrator | + ping -c3 192.168.112.115 2025-05-19 15:14:21.696193 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data. 
2025-05-19 15:14:21.696271 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=5.31 ms 2025-05-19 15:14:22.695053 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=2.61 ms 2025-05-19 15:14:23.696407 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=2.06 ms 2025-05-19 15:14:23.696582 | orchestrator | 2025-05-19 15:14:23.696601 | orchestrator | --- 192.168.112.115 ping statistics --- 2025-05-19 15:14:23.696614 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:14:23.696626 | orchestrator | rtt min/avg/max/mdev = 2.064/3.326/5.310/1.419 ms 2025-05-19 15:14:23.697465 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:14:23.697490 | orchestrator | + ping -c3 192.168.112.148 2025-05-19 15:14:23.707522 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data. 2025-05-19 15:14:23.707578 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=5.97 ms 2025-05-19 15:14:24.706478 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=2.71 ms 2025-05-19 15:14:25.707790 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=1.86 ms 2025-05-19 15:14:25.707894 | orchestrator | 2025-05-19 15:14:25.707911 | orchestrator | --- 192.168.112.148 ping statistics --- 2025-05-19 15:14:25.707924 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:14:25.707935 | orchestrator | rtt min/avg/max/mdev = 1.857/3.512/5.967/1.770 ms 2025-05-19 15:14:25.708018 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:14:25.708034 | orchestrator | + ping -c3 192.168.112.176 2025-05-19 15:14:25.721488 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data. 2025-05-19 15:14:25.721612 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=9.23 ms 2025-05-19 15:14:26.716450 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=2.56 ms 2025-05-19 15:14:27.718491 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=2.05 ms 2025-05-19 15:14:27.718652 | orchestrator | 2025-05-19 15:14:27.718668 | orchestrator | --- 192.168.112.176 ping statistics --- 2025-05-19 15:14:27.718680 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:14:27.718691 | orchestrator | rtt min/avg/max/mdev = 2.048/4.611/9.225/3.268 ms 2025-05-19 15:14:27.718703 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:14:27.718714 | orchestrator | + ping -c3 192.168.112.117 2025-05-19 15:14:27.728908 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 
2025-05-19 15:14:27.728971 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=6.05 ms 2025-05-19 15:14:28.726813 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.59 ms 2025-05-19 15:14:29.728920 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.70 ms 2025-05-19 15:14:29.729021 | orchestrator | 2025-05-19 15:14:29.729036 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-05-19 15:14:29.729049 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:14:29.729060 | orchestrator | rtt min/avg/max/mdev = 1.699/3.446/6.053/1.878 ms 2025-05-19 15:14:29.729071 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:14:29.729083 | orchestrator | + ping -c3 192.168.112.177 2025-05-19 15:14:29.738942 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 2025-05-19 15:14:29.739024 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=6.50 ms 2025-05-19 15:14:30.736471 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.63 ms 2025-05-19 15:14:31.737961 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=2.24 ms 2025-05-19 15:14:31.738115 | orchestrator | 2025-05-19 15:14:31.738133 | orchestrator | --- 192.168.112.177 ping statistics --- 2025-05-19 15:14:31.738146 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:14:31.738211 | orchestrator | rtt min/avg/max/mdev = 2.235/3.787/6.498/1.923 ms 2025-05-19 15:14:31.738366 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-19 15:14:31.738386 | orchestrator | + compute_list 2025-05-19 15:14:31.738397 | orchestrator | + osism manage compute list testbed-node-3 2025-05-19 15:14:35.103591 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:14:35.103704 | orchestrator | | ID | Name | Status | 2025-05-19 15:14:35.103718 | orchestrator | |--------------------------------------+--------+----------| 2025-05-19 15:14:35.104182 | orchestrator | | caaf9de9-b08d-4288-9d17-37ce2bd2ee1b | test-3 | ACTIVE | 2025-05-19 15:14:35.104201 | orchestrator | | 33055056-a68f-4dff-b938-428c5f68477e | test-2 | ACTIVE | 2025-05-19 15:14:35.104213 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:14:35.412230 | orchestrator | + osism manage compute list testbed-node-4 2025-05-19 15:14:38.438706 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:14:38.438822 | orchestrator | | ID | Name | Status | 2025-05-19 15:14:38.438837 | orchestrator | |--------------------------------------+--------+----------| 2025-05-19 15:14:38.438849 | orchestrator | | 630544bf-ec8b-4c67-95a1-bd72c43dbe21 | test-4 | ACTIVE | 2025-05-19 15:14:38.438860 | orchestrator | | b8b086f5-c14b-4b32-9177-e7b68df1f7c5 | test | ACTIVE | 2025-05-19 15:14:38.438870 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:14:38.657697 | orchestrator | + osism manage compute list testbed-node-5 2025-05-19 15:14:41.668055 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:14:41.668158 | orchestrator | | ID | Name | Status | 2025-05-19 15:14:41.668168 | orchestrator | |--------------------------------------+--------+----------| 2025-05-19 15:14:41.668177 | orchestrator | | 
283d4ae5-cac8-4547-9575-80bc63830c83 | test-1 | ACTIVE | 2025-05-19 15:14:41.668186 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:14:41.981958 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-05-19 15:14:44.971523 | orchestrator | 2025-05-19 15:14:44 | INFO  | Live migrating server 630544bf-ec8b-4c67-95a1-bd72c43dbe21 2025-05-19 15:14:58.940774 | orchestrator | 2025-05-19 15:14:58 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:15:01.314356 | orchestrator | 2025-05-19 15:15:01 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:15:03.753312 | orchestrator | 2025-05-19 15:15:03 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:15:06.265127 | orchestrator | 2025-05-19 15:15:06 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:15:08.916617 | orchestrator | 2025-05-19 15:15:08 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:15:11.197785 | orchestrator | 2025-05-19 15:15:11 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:15:13.638506 | orchestrator | 2025-05-19 15:15:13 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:15:16.037025 | orchestrator | 2025-05-19 15:15:16 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) completed with status ACTIVE 2025-05-19 15:15:16.037110 | orchestrator | 2025-05-19 15:15:16 | INFO  | Live migrating server b8b086f5-c14b-4b32-9177-e7b68df1f7c5 2025-05-19 15:15:28.433916 | orchestrator | 2025-05-19 15:15:28 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:15:30.778331 | orchestrator | 2025-05-19 15:15:30 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:15:33.242533 | orchestrator | 2025-05-19 15:15:33 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:15:35.534707 | orchestrator | 2025-05-19 15:15:35 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:15:37.803872 | orchestrator | 2025-05-19 15:15:37 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:15:40.365051 | orchestrator | 2025-05-19 15:15:40 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:15:42.708540 | orchestrator | 2025-05-19 15:15:42 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:15:45.182445 | orchestrator | 2025-05-19 15:15:45 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:15:47.482466 | orchestrator | 2025-05-19 15:15:47 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:15:49.841091 | orchestrator | 2025-05-19 15:15:49 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) completed with status ACTIVE 2025-05-19 15:15:50.072543 | orchestrator | + compute_list 2025-05-19 15:15:50.072732 | orchestrator | + osism manage compute list testbed-node-3 
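The server_ping and compute_list steps that the xtrace keeps expanding above are two small shell helpers. A minimal reconstruction from the trace follows (the for-loop over floating IPs is verbatim from the expanded commands; the function wrappers and the node list in compute_list are inferred from the call pattern, not the script's literal source):

    server_ping() {
        # Ping every ACTIVE floating IP three times; tr strips stray
        # carriage returns from the CLI output before ping sees them.
        for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
            ping -c3 "${address}"
        done
    }

    compute_list() {
        # Show which instances currently sit on each compute node.
        for node in testbed-node-3 testbed-node-4 testbed-node-5; do
            osism manage compute list "${node}"
        done
    }

The node-3 listing that resumes below confirms the first drain worked: both instances from testbed-node-4 now report on testbed-node-3, and the node-4 table comes back empty.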
2025-05-19 15:15:53.138256 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:15:53.138368 | orchestrator | | ID | Name | Status | 2025-05-19 15:15:53.138385 | orchestrator | |--------------------------------------+--------+----------| 2025-05-19 15:15:53.138398 | orchestrator | | 630544bf-ec8b-4c67-95a1-bd72c43dbe21 | test-4 | ACTIVE | 2025-05-19 15:15:53.138409 | orchestrator | | caaf9de9-b08d-4288-9d17-37ce2bd2ee1b | test-3 | ACTIVE | 2025-05-19 15:15:53.138421 | orchestrator | | 33055056-a68f-4dff-b938-428c5f68477e | test-2 | ACTIVE | 2025-05-19 15:15:53.138432 | orchestrator | | b8b086f5-c14b-4b32-9177-e7b68df1f7c5 | test | ACTIVE | 2025-05-19 15:15:53.138443 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:15:53.358199 | orchestrator | + osism manage compute list testbed-node-4 2025-05-19 15:15:55.896129 | orchestrator | +------+--------+----------+ 2025-05-19 15:15:55.896240 | orchestrator | | ID | Name | Status | 2025-05-19 15:15:55.896255 | orchestrator | |------+--------+----------| 2025-05-19 15:15:55.896267 | orchestrator | +------+--------+----------+ 2025-05-19 15:15:56.147629 | orchestrator | + osism manage compute list testbed-node-5 2025-05-19 15:15:58.911863 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:15:58.911974 | orchestrator | | ID | Name | Status | 2025-05-19 15:15:58.911990 | orchestrator | |--------------------------------------+--------+----------| 2025-05-19 15:15:58.912030 | orchestrator | | 283d4ae5-cac8-4547-9575-80bc63830c83 | test-1 | ACTIVE | 2025-05-19 15:15:58.912041 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:15:59.159703 | orchestrator | + server_ping 2025-05-19 15:15:59.160355 | orchestrator | ++ tr -d '\r' 2025-05-19 15:15:59.160388 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-05-19 15:16:01.915380 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:16:01.915518 | orchestrator | + ping -c3 192.168.112.115 2025-05-19 15:16:01.926118 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data. 2025-05-19 15:16:01.926198 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=9.11 ms 2025-05-19 15:16:02.921166 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=2.81 ms 2025-05-19 15:16:03.923056 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=1.93 ms 2025-05-19 15:16:03.923163 | orchestrator | 2025-05-19 15:16:03.923198 | orchestrator | --- 192.168.112.115 ping statistics --- 2025-05-19 15:16:03.923211 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-05-19 15:16:03.923222 | orchestrator | rtt min/avg/max/mdev = 1.925/4.617/9.113/3.199 ms 2025-05-19 15:16:03.923233 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:16:03.923244 | orchestrator | + ping -c3 192.168.112.148 2025-05-19 15:16:03.934525 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data. 
2025-05-19 15:16:03.934607 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=8.69 ms 2025-05-19 15:16:04.930290 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=3.00 ms 2025-05-19 15:16:05.930968 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=1.89 ms 2025-05-19 15:16:05.931074 | orchestrator | 2025-05-19 15:16:05.931090 | orchestrator | --- 192.168.112.148 ping statistics --- 2025-05-19 15:16:05.931103 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:16:05.931115 | orchestrator | rtt min/avg/max/mdev = 1.891/4.526/8.686/2.975 ms 2025-05-19 15:16:05.931257 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:16:05.931274 | orchestrator | + ping -c3 192.168.112.176 2025-05-19 15:16:05.945826 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data. 2025-05-19 15:16:05.945916 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=9.74 ms 2025-05-19 15:16:06.941362 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=2.92 ms 2025-05-19 15:16:07.942495 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=2.27 ms 2025-05-19 15:16:07.942598 | orchestrator | 2025-05-19 15:16:07.942614 | orchestrator | --- 192.168.112.176 ping statistics --- 2025-05-19 15:16:07.942626 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:16:07.942638 | orchestrator | rtt min/avg/max/mdev = 2.273/4.977/9.743/3.380 ms 2025-05-19 15:16:07.942711 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:16:07.942726 | orchestrator | + ping -c3 192.168.112.117 2025-05-19 15:16:07.953762 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 2025-05-19 15:16:07.953820 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=6.37 ms 2025-05-19 15:16:08.952151 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.43 ms 2025-05-19 15:16:09.953713 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.04 ms 2025-05-19 15:16:09.953836 | orchestrator | 2025-05-19 15:16:09.953855 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-05-19 15:16:09.953868 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-05-19 15:16:09.953879 | orchestrator | rtt min/avg/max/mdev = 2.040/3.614/6.371/1.956 ms 2025-05-19 15:16:09.954432 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:16:09.954464 | orchestrator | + ping -c3 192.168.112.177 2025-05-19 15:16:09.967248 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 
2025-05-19 15:16:09.967306 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=8.29 ms 2025-05-19 15:16:10.962835 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.43 ms 2025-05-19 15:16:11.963905 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=1.93 ms 2025-05-19 15:16:11.964014 | orchestrator | 2025-05-19 15:16:11.964032 | orchestrator | --- 192.168.112.177 ping statistics --- 2025-05-19 15:16:11.964045 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-05-19 15:16:11.964057 | orchestrator | rtt min/avg/max/mdev = 1.932/4.216/8.289/2.886 ms 2025-05-19 15:16:11.964153 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-05-19 15:16:14.829723 | orchestrator | 2025-05-19 15:16:14 | INFO  | Live migrating server 283d4ae5-cac8-4547-9575-80bc63830c83 2025-05-19 15:16:25.695054 | orchestrator | 2025-05-19 15:16:25 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:16:28.049045 | orchestrator | 2025-05-19 15:16:28 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:16:30.376270 | orchestrator | 2025-05-19 15:16:30 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:16:32.736604 | orchestrator | 2025-05-19 15:16:32 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:16:34.987801 | orchestrator | 2025-05-19 15:16:34 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:16:37.334245 | orchestrator | 2025-05-19 15:16:37 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:16:39.609485 | orchestrator | 2025-05-19 15:16:39 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:16:41.950358 | orchestrator | 2025-05-19 15:16:41 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) completed with status ACTIVE 2025-05-19 15:16:42.183731 | orchestrator | + compute_list 2025-05-19 15:16:42.183830 | orchestrator | + osism manage compute list testbed-node-3 2025-05-19 15:16:45.158194 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:16:45.158304 | orchestrator | | ID | Name | Status | 2025-05-19 15:16:45.158316 | orchestrator | |--------------------------------------+--------+----------| 2025-05-19 15:16:45.158326 | orchestrator | | 630544bf-ec8b-4c67-95a1-bd72c43dbe21 | test-4 | ACTIVE | 2025-05-19 15:16:45.158335 | orchestrator | | caaf9de9-b08d-4288-9d17-37ce2bd2ee1b | test-3 | ACTIVE | 2025-05-19 15:16:45.158361 | orchestrator | | 33055056-a68f-4dff-b938-428c5f68477e | test-2 | ACTIVE | 2025-05-19 15:16:45.159090 | orchestrator | | 283d4ae5-cac8-4547-9575-80bc63830c83 | test-1 | ACTIVE | 2025-05-19 15:16:45.159107 | orchestrator | | b8b086f5-c14b-4b32-9177-e7b68df1f7c5 | test | ACTIVE | 2025-05-19 15:16:45.159117 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:16:45.404854 | orchestrator | + osism manage compute list testbed-node-4 2025-05-19 15:16:47.886915 | orchestrator | +------+--------+----------+ 2025-05-19 15:16:47.887035 | orchestrator | | ID | Name | Status | 2025-05-19 15:16:47.887051 | orchestrator | |------+--------+----------| 2025-05-19 15:16:47.887064 | 
orchestrator | +------+--------+----------+ 2025-05-19 15:16:48.117925 | orchestrator | + osism manage compute list testbed-node-5 2025-05-19 15:16:50.524026 | orchestrator | +------+--------+----------+ 2025-05-19 15:16:50.524161 | orchestrator | | ID | Name | Status | 2025-05-19 15:16:50.524177 | orchestrator | |------+--------+----------| 2025-05-19 15:16:50.524189 | orchestrator | +------+--------+----------+ 2025-05-19 15:16:50.774437 | orchestrator | + server_ping 2025-05-19 15:16:50.775458 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-05-19 15:16:50.775498 | orchestrator | ++ tr -d '\r' 2025-05-19 15:16:53.476968 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:16:53.477101 | orchestrator | + ping -c3 192.168.112.115 2025-05-19 15:16:53.488683 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data. 2025-05-19 15:16:53.488790 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=7.99 ms 2025-05-19 15:16:54.484165 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=2.93 ms 2025-05-19 15:16:55.484301 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=1.82 ms 2025-05-19 15:16:55.484408 | orchestrator | 2025-05-19 15:16:55.484424 | orchestrator | --- 192.168.112.115 ping statistics --- 2025-05-19 15:16:55.484436 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms 2025-05-19 15:16:55.484447 | orchestrator | rtt min/avg/max/mdev = 1.817/4.248/7.994/2.687 ms 2025-05-19 15:16:55.484853 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:16:55.484880 | orchestrator | + ping -c3 192.168.112.148 2025-05-19 15:16:55.496053 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data. 2025-05-19 15:16:55.496095 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=5.78 ms 2025-05-19 15:16:56.495769 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=2.98 ms 2025-05-19 15:16:57.496129 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=2.21 ms 2025-05-19 15:16:57.496244 | orchestrator | 2025-05-19 15:16:57.496260 | orchestrator | --- 192.168.112.148 ping statistics --- 2025-05-19 15:16:57.496273 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:16:57.496284 | orchestrator | rtt min/avg/max/mdev = 2.205/3.652/5.777/1.534 ms 2025-05-19 15:16:57.496605 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:16:57.496633 | orchestrator | + ping -c3 192.168.112.176 2025-05-19 15:16:57.509623 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data. 
2025-05-19 15:16:57.509674 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=8.57 ms 2025-05-19 15:16:58.505302 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=2.19 ms 2025-05-19 15:16:59.506467 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=1.88 ms 2025-05-19 15:16:59.506575 | orchestrator | 2025-05-19 15:16:59.506591 | orchestrator | --- 192.168.112.176 ping statistics --- 2025-05-19 15:16:59.506604 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:16:59.506617 | orchestrator | rtt min/avg/max/mdev = 1.884/4.213/8.570/3.083 ms 2025-05-19 15:16:59.507001 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:16:59.507034 | orchestrator | + ping -c3 192.168.112.117 2025-05-19 15:16:59.517448 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 2025-05-19 15:16:59.517489 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=5.73 ms 2025-05-19 15:17:00.513756 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=1.72 ms 2025-05-19 15:17:01.515957 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.60 ms 2025-05-19 15:17:01.516151 | orchestrator | 2025-05-19 15:17:01.516834 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-05-19 15:17:01.516856 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-05-19 15:17:01.516868 | orchestrator | rtt min/avg/max/mdev = 1.604/3.017/5.731/1.919 ms 2025-05-19 15:17:01.516893 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:17:01.516907 | orchestrator | + ping -c3 192.168.112.177 2025-05-19 15:17:01.525996 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 
2025-05-19 15:17:01.526086 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=5.19 ms 2025-05-19 15:17:02.525028 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.30 ms 2025-05-19 15:17:03.525973 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=1.86 ms 2025-05-19 15:17:03.526132 | orchestrator | 2025-05-19 15:17:03.526151 | orchestrator | --- 192.168.112.177 ping statistics --- 2025-05-19 15:17:03.526164 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:17:03.526176 | orchestrator | rtt min/avg/max/mdev = 1.860/3.118/5.193/1.478 ms 2025-05-19 15:17:03.527039 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-05-19 15:17:06.612201 | orchestrator | 2025-05-19 15:17:06 | INFO  | Live migrating server 630544bf-ec8b-4c67-95a1-bd72c43dbe21 2025-05-19 15:17:17.257615 | orchestrator | 2025-05-19 15:17:17 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:17:19.595405 | orchestrator | 2025-05-19 15:17:19 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:17:21.906190 | orchestrator | 2025-05-19 15:17:21 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:17:24.249524 | orchestrator | 2025-05-19 15:17:24 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:17:26.611510 | orchestrator | 2025-05-19 15:17:26 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:17:28.949263 | orchestrator | 2025-05-19 15:17:28 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:17:31.303414 | orchestrator | 2025-05-19 15:17:31 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:17:33.689968 | orchestrator | 2025-05-19 15:17:33 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:17:36.080492 | orchestrator | 2025-05-19 15:17:36 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) completed with status ACTIVE 2025-05-19 15:17:36.080578 | orchestrator | 2025-05-19 15:17:36 | INFO  | Live migrating server caaf9de9-b08d-4288-9d17-37ce2bd2ee1b 2025-05-19 15:17:48.381307 | orchestrator | 2025-05-19 15:17:48 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:17:50.699215 | orchestrator | 2025-05-19 15:17:50 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:17:53.221108 | orchestrator | 2025-05-19 15:17:53 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:17:55.488486 | orchestrator | 2025-05-19 15:17:55 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:17:57.759081 | orchestrator | 2025-05-19 15:17:57 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:18:00.049865 | orchestrator | 2025-05-19 15:18:00 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:18:02.367770 | orchestrator | 2025-05-19 15:18:02 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b 
(test-3) is still in progress 2025-05-19 15:18:04.679401 | orchestrator | 2025-05-19 15:18:04 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) completed with status ACTIVE 2025-05-19 15:18:04.679495 | orchestrator | 2025-05-19 15:18:04 | INFO  | Live migrating server 33055056-a68f-4dff-b938-428c5f68477e 2025-05-19 15:18:15.762844 | orchestrator | 2025-05-19 15:18:15 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:18:18.118398 | orchestrator | 2025-05-19 15:18:18 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:18:20.431978 | orchestrator | 2025-05-19 15:18:20 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:18:22.736077 | orchestrator | 2025-05-19 15:18:22 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:18:25.013450 | orchestrator | 2025-05-19 15:18:25 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:18:27.273291 | orchestrator | 2025-05-19 15:18:27 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:18:29.628066 | orchestrator | 2025-05-19 15:18:29 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:18:31.987087 | orchestrator | 2025-05-19 15:18:31 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:18:34.363132 | orchestrator | 2025-05-19 15:18:34 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:18:36.707162 | orchestrator | 2025-05-19 15:18:36 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) completed with status ACTIVE 2025-05-19 15:18:36.707270 | orchestrator | 2025-05-19 15:18:36 | INFO  | Live migrating server 283d4ae5-cac8-4547-9575-80bc63830c83 2025-05-19 15:18:47.866303 | orchestrator | 2025-05-19 15:18:47 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:18:50.211239 | orchestrator | 2025-05-19 15:18:50 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:18:52.583524 | orchestrator | 2025-05-19 15:18:52 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:18:54.949207 | orchestrator | 2025-05-19 15:18:54 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:18:57.289340 | orchestrator | 2025-05-19 15:18:57 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:18:59.633076 | orchestrator | 2025-05-19 15:18:59 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:19:01.983803 | orchestrator | 2025-05-19 15:19:01 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:19:04.309051 | orchestrator | 2025-05-19 15:19:04 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) completed with status ACTIVE 2025-05-19 15:19:04.309174 | orchestrator | 2025-05-19 15:19:04 | INFO  | Live migrating server b8b086f5-c14b-4b32-9177-e7b68df1f7c5 2025-05-19 15:19:15.416202 | orchestrator | 2025-05-19 15:19:15 | INFO  | Live 
migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:19:17.731402 | orchestrator | 2025-05-19 15:19:17 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:19:20.273942 | orchestrator | 2025-05-19 15:19:20 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:19:22.635108 | orchestrator | 2025-05-19 15:19:22 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:19:24.929357 | orchestrator | 2025-05-19 15:19:24 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:19:27.221302 | orchestrator | 2025-05-19 15:19:27 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:19:29.487590 | orchestrator | 2025-05-19 15:19:29 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:19:31.793691 | orchestrator | 2025-05-19 15:19:31 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:19:34.095151 | orchestrator | 2025-05-19 15:19:34 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) completed with status ACTIVE 2025-05-19 15:19:34.401965 | orchestrator | + compute_list 2025-05-19 15:19:34.402124 | orchestrator | + osism manage compute list testbed-node-3 2025-05-19 15:19:36.956055 | orchestrator | +------+--------+----------+ 2025-05-19 15:19:36.956162 | orchestrator | | ID | Name | Status | 2025-05-19 15:19:36.956176 | orchestrator | |------+--------+----------| 2025-05-19 15:19:36.956188 | orchestrator | +------+--------+----------+ 2025-05-19 15:19:37.194385 | orchestrator | + osism manage compute list testbed-node-4 2025-05-19 15:19:40.109751 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:19:40.109938 | orchestrator | | ID | Name | Status | 2025-05-19 15:19:40.109967 | orchestrator | |--------------------------------------+--------+----------| 2025-05-19 15:19:40.109981 | orchestrator | | 630544bf-ec8b-4c67-95a1-bd72c43dbe21 | test-4 | ACTIVE | 2025-05-19 15:19:40.109992 | orchestrator | | caaf9de9-b08d-4288-9d17-37ce2bd2ee1b | test-3 | ACTIVE | 2025-05-19 15:19:40.110003 | orchestrator | | 33055056-a68f-4dff-b938-428c5f68477e | test-2 | ACTIVE | 2025-05-19 15:19:40.110087 | orchestrator | | 283d4ae5-cac8-4547-9575-80bc63830c83 | test-1 | ACTIVE | 2025-05-19 15:19:40.110109 | orchestrator | | b8b086f5-c14b-4b32-9177-e7b68df1f7c5 | test | ACTIVE | 2025-05-19 15:19:40.110126 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:19:40.331153 | orchestrator | + osism manage compute list testbed-node-5 2025-05-19 15:19:42.931376 | orchestrator | +------+--------+----------+ 2025-05-19 15:19:42.931548 | orchestrator | | ID | Name | Status | 2025-05-19 15:19:42.931563 | orchestrator | |------+--------+----------| 2025-05-19 15:19:42.931574 | orchestrator | +------+--------+----------+ 2025-05-19 15:19:43.204362 | orchestrator | + server_ping 2025-05-19 15:19:43.205304 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-05-19 15:19:43.205848 | orchestrator | ++ tr -d '\r' 2025-05-19 15:19:46.150408 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 
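At this point the job has rotated the whole instance set once through the cluster: testbed-node-4 was drained onto testbed-node-3, then testbed-node-5 onto testbed-node-3, then testbed-node-3 onto testbed-node-4, and the hop from testbed-node-4 onto testbed-node-5 follows below. After every hop, server_ping re-checks that each floating IP still answers, which is the point of the exercise: connectivity must survive live migration. Each "osism manage compute migrate --yes --target TARGET SOURCE" call live-migrates every server off SOURCE and polls until it settles back to ACTIVE, as the INFO lines show. A rough single-step equivalent with plain OpenStack CLI calls, as a hypothetical sketch (osism drives this through the SDK with its own polling and logging; listing servers by host also needs admin credentials):

    drain_host() {
        # Live-migrate every server off the given hypervisor and wait
        # until each one settles back to ACTIVE, mirroring the
        # "is still in progress" / "completed with status ACTIVE" loop.
        local source="$1"
        for id in $(openstack --os-cloud test server list --host "${source}" --all-projects -f value -c ID); do
            openstack --os-cloud test server migrate --live-migration "${id}"
            while [ "$(openstack --os-cloud test server show "${id}" -f value -c status)" != "ACTIVE" ]; do
                sleep 2
            done
        done
    }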
2025-05-19 15:19:46.150551 | orchestrator | + ping -c3 192.168.112.115 2025-05-19 15:19:46.167575 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data. 2025-05-19 15:19:46.167655 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=13.9 ms 2025-05-19 15:19:47.158092 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=2.86 ms 2025-05-19 15:19:48.160068 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=2.72 ms 2025-05-19 15:19:48.160173 | orchestrator | 2025-05-19 15:19:48.160189 | orchestrator | --- 192.168.112.115 ping statistics --- 2025-05-19 15:19:48.160201 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-05-19 15:19:48.160213 | orchestrator | rtt min/avg/max/mdev = 2.723/6.508/13.940/5.255 ms 2025-05-19 15:19:48.160257 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:19:48.160272 | orchestrator | + ping -c3 192.168.112.148 2025-05-19 15:19:48.173326 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data. 2025-05-19 15:19:48.173434 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=8.46 ms 2025-05-19 15:19:49.169242 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=2.50 ms 2025-05-19 15:19:50.170841 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=1.98 ms 2025-05-19 15:19:50.171028 | orchestrator | 2025-05-19 15:19:50.171047 | orchestrator | --- 192.168.112.148 ping statistics --- 2025-05-19 15:19:50.171060 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:19:50.171071 | orchestrator | rtt min/avg/max/mdev = 1.975/4.311/8.463/2.943 ms 2025-05-19 15:19:50.171123 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:19:50.171137 | orchestrator | + ping -c3 192.168.112.176 2025-05-19 15:19:50.184022 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data. 2025-05-19 15:19:50.184093 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=8.09 ms 2025-05-19 15:19:51.180058 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=2.73 ms 2025-05-19 15:19:52.181973 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=2.21 ms 2025-05-19 15:19:52.182138 | orchestrator | 2025-05-19 15:19:52.182155 | orchestrator | --- 192.168.112.176 ping statistics --- 2025-05-19 15:19:52.182168 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:19:52.182179 | orchestrator | rtt min/avg/max/mdev = 2.206/4.341/8.091/2.659 ms 2025-05-19 15:19:52.182569 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:19:52.182597 | orchestrator | + ping -c3 192.168.112.117 2025-05-19 15:19:52.194089 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 
2025-05-19 15:19:52.194143 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=7.47 ms 2025-05-19 15:19:53.190998 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.46 ms 2025-05-19 15:19:54.192964 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.83 ms 2025-05-19 15:19:54.193069 | orchestrator | 2025-05-19 15:19:54.193085 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-05-19 15:19:54.193099 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:19:54.193110 | orchestrator | rtt min/avg/max/mdev = 1.833/3.920/7.469/2.522 ms 2025-05-19 15:19:54.193171 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:19:54.193186 | orchestrator | + ping -c3 192.168.112.177 2025-05-19 15:19:54.203876 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 2025-05-19 15:19:54.203951 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=6.35 ms 2025-05-19 15:19:55.203780 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.39 ms 2025-05-19 15:19:56.202203 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=1.85 ms 2025-05-19 15:19:56.203346 | orchestrator | 2025-05-19 15:19:56.203395 | orchestrator | --- 192.168.112.177 ping statistics --- 2025-05-19 15:19:56.203409 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-05-19 15:19:56.203421 | orchestrator | rtt min/avg/max/mdev = 1.853/3.531/6.349/2.004 ms 2025-05-19 15:19:56.203448 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-05-19 15:19:59.353270 | orchestrator | 2025-05-19 15:19:59 | INFO  | Live migrating server 630544bf-ec8b-4c67-95a1-bd72c43dbe21 2025-05-19 15:20:10.515121 | orchestrator | 2025-05-19 15:20:10 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:20:13.026128 | orchestrator | 2025-05-19 15:20:13 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:20:15.503204 | orchestrator | 2025-05-19 15:20:15 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:20:17.831873 | orchestrator | 2025-05-19 15:20:17 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:20:20.100140 | orchestrator | 2025-05-19 15:20:20 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:20:22.362320 | orchestrator | 2025-05-19 15:20:22 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) is still in progress 2025-05-19 15:20:24.669121 | orchestrator | 2025-05-19 15:20:24 | INFO  | Live migration of 630544bf-ec8b-4c67-95a1-bd72c43dbe21 (test-4) completed with status ACTIVE 2025-05-19 15:20:24.669200 | orchestrator | 2025-05-19 15:20:24 | INFO  | Live migrating server caaf9de9-b08d-4288-9d17-37ce2bd2ee1b 2025-05-19 15:20:36.111663 | orchestrator | 2025-05-19 15:20:36 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:20:38.470302 | orchestrator | 2025-05-19 15:20:38 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:20:40.847433 | orchestrator | 2025-05-19 15:20:40 | INFO  | Live migration 
of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:20:43.170486 | orchestrator | 2025-05-19 15:20:43 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:20:45.625890 | orchestrator | 2025-05-19 15:20:45 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:20:47.920692 | orchestrator | 2025-05-19 15:20:47 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:20:50.174514 | orchestrator | 2025-05-19 15:20:50 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) is still in progress 2025-05-19 15:20:52.471265 | orchestrator | 2025-05-19 15:20:52 | INFO  | Live migration of caaf9de9-b08d-4288-9d17-37ce2bd2ee1b (test-3) completed with status ACTIVE 2025-05-19 15:20:52.471373 | orchestrator | 2025-05-19 15:20:52 | INFO  | Live migrating server 33055056-a68f-4dff-b938-428c5f68477e 2025-05-19 15:21:02.368705 | orchestrator | 2025-05-19 15:21:02 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:21:04.720233 | orchestrator | 2025-05-19 15:21:04 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:21:06.963501 | orchestrator | 2025-05-19 15:21:06 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:21:09.358453 | orchestrator | 2025-05-19 15:21:09 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:21:11.723292 | orchestrator | 2025-05-19 15:21:11 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:21:14.035062 | orchestrator | 2025-05-19 15:21:14 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:21:16.422861 | orchestrator | 2025-05-19 15:21:16 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) is still in progress 2025-05-19 15:21:18.671047 | orchestrator | 2025-05-19 15:21:18 | INFO  | Live migration of 33055056-a68f-4dff-b938-428c5f68477e (test-2) completed with status ACTIVE 2025-05-19 15:21:18.671149 | orchestrator | 2025-05-19 15:21:18 | INFO  | Live migrating server 283d4ae5-cac8-4547-9575-80bc63830c83 2025-05-19 15:21:28.968804 | orchestrator | 2025-05-19 15:21:28 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:21:31.351900 | orchestrator | 2025-05-19 15:21:31 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:21:33.716925 | orchestrator | 2025-05-19 15:21:33 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:21:36.012492 | orchestrator | 2025-05-19 15:21:36 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:21:38.296445 | orchestrator | 2025-05-19 15:21:38 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:21:40.644096 | orchestrator | 2025-05-19 15:21:40 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 15:21:42.990762 | orchestrator | 2025-05-19 15:21:42 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) is still in progress 2025-05-19 
15:21:45.259078 | orchestrator | 2025-05-19 15:21:45 | INFO  | Live migration of 283d4ae5-cac8-4547-9575-80bc63830c83 (test-1) completed with status ACTIVE 2025-05-19 15:21:45.259192 | orchestrator | 2025-05-19 15:21:45 | INFO  | Live migrating server b8b086f5-c14b-4b32-9177-e7b68df1f7c5 2025-05-19 15:21:55.457200 | orchestrator | 2025-05-19 15:21:55 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:21:57.807398 | orchestrator | 2025-05-19 15:21:57 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:22:00.173699 | orchestrator | 2025-05-19 15:22:00 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:22:02.559904 | orchestrator | 2025-05-19 15:22:02 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:22:04.831471 | orchestrator | 2025-05-19 15:22:04 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:22:07.116607 | orchestrator | 2025-05-19 15:22:07 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:22:09.455536 | orchestrator | 2025-05-19 15:22:09 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:22:11.822449 | orchestrator | 2025-05-19 15:22:11 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:22:14.142893 | orchestrator | 2025-05-19 15:22:14 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) is still in progress 2025-05-19 15:22:16.414506 | orchestrator | 2025-05-19 15:22:16 | INFO  | Live migration of b8b086f5-c14b-4b32-9177-e7b68df1f7c5 (test) completed with status ACTIVE 2025-05-19 15:22:16.636585 | orchestrator | + compute_list 2025-05-19 15:22:16.636686 | orchestrator | + osism manage compute list testbed-node-3 2025-05-19 15:22:19.594739 | orchestrator | +------+--------+----------+ 2025-05-19 15:22:19.594857 | orchestrator | | ID | Name | Status | 2025-05-19 15:22:19.594872 | orchestrator | |------+--------+----------| 2025-05-19 15:22:19.594884 | orchestrator | +------+--------+----------+ 2025-05-19 15:22:19.926712 | orchestrator | + osism manage compute list testbed-node-4 2025-05-19 15:22:22.442461 | orchestrator | +------+--------+----------+ 2025-05-19 15:22:22.442540 | orchestrator | | ID | Name | Status | 2025-05-19 15:22:22.442548 | orchestrator | |------+--------+----------| 2025-05-19 15:22:22.442553 | orchestrator | +------+--------+----------+ 2025-05-19 15:22:22.828601 | orchestrator | + osism manage compute list testbed-node-5 2025-05-19 15:22:25.837121 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:22:25.837261 | orchestrator | | ID | Name | Status | 2025-05-19 15:22:25.837278 | orchestrator | |--------------------------------------+--------+----------| 2025-05-19 15:22:25.837289 | orchestrator | | 630544bf-ec8b-4c67-95a1-bd72c43dbe21 | test-4 | ACTIVE | 2025-05-19 15:22:25.837300 | orchestrator | | caaf9de9-b08d-4288-9d17-37ce2bd2ee1b | test-3 | ACTIVE | 2025-05-19 15:22:25.837311 | orchestrator | | 33055056-a68f-4dff-b938-428c5f68477e | test-2 | ACTIVE | 2025-05-19 15:22:25.837322 | orchestrator | | 283d4ae5-cac8-4547-9575-80bc63830c83 | test-1 | ACTIVE | 2025-05-19 15:22:25.837333 | orchestrator | | b8b086f5-c14b-4b32-9177-e7b68df1f7c5 | test | ACTIVE | 
2025-05-19 15:22:25.837343 | orchestrator | +--------------------------------------+--------+----------+ 2025-05-19 15:22:26.080776 | orchestrator | + server_ping 2025-05-19 15:22:26.084414 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-05-19 15:22:26.084477 | orchestrator | ++ tr -d '\r' 2025-05-19 15:22:29.080056 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:22:29.080162 | orchestrator | + ping -c3 192.168.112.115 2025-05-19 15:22:29.098657 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data. 2025-05-19 15:22:29.120039 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=13.6 ms 2025-05-19 15:22:30.089760 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=2.60 ms 2025-05-19 15:22:31.090636 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=2.03 ms 2025-05-19 15:22:31.090778 | orchestrator | 2025-05-19 15:22:31.090796 | orchestrator | --- 192.168.112.115 ping statistics --- 2025-05-19 15:22:31.090809 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:22:31.090820 | orchestrator | rtt min/avg/max/mdev = 2.027/6.069/13.585/5.319 ms 2025-05-19 15:22:31.091741 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:22:31.091776 | orchestrator | + ping -c3 192.168.112.148 2025-05-19 15:22:31.105225 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data. 2025-05-19 15:22:31.105306 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=6.04 ms 2025-05-19 15:22:32.103403 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=2.67 ms 2025-05-19 15:22:33.107666 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=2.33 ms 2025-05-19 15:22:33.107769 | orchestrator | 2025-05-19 15:22:33.107785 | orchestrator | --- 192.168.112.148 ping statistics --- 2025-05-19 15:22:33.107798 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:22:33.107809 | orchestrator | rtt min/avg/max/mdev = 2.334/3.679/6.039/1.673 ms 2025-05-19 15:22:33.107850 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:22:33.107864 | orchestrator | + ping -c3 192.168.112.176 2025-05-19 15:22:33.118432 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data. 
2025-05-19 15:22:33.118493 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=8.45 ms 2025-05-19 15:22:34.114819 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=3.08 ms 2025-05-19 15:22:35.115694 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=1.84 ms 2025-05-19 15:22:35.115799 | orchestrator | 2025-05-19 15:22:35.115815 | orchestrator | --- 192.168.112.176 ping statistics --- 2025-05-19 15:22:35.115827 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:22:35.115838 | orchestrator | rtt min/avg/max/mdev = 1.838/4.456/8.448/2.867 ms 2025-05-19 15:22:35.115937 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:22:35.115953 | orchestrator | + ping -c3 192.168.112.117 2025-05-19 15:22:35.123881 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 2025-05-19 15:22:35.123976 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=4.74 ms 2025-05-19 15:22:36.123820 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.42 ms 2025-05-19 15:22:37.124564 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.79 ms 2025-05-19 15:22:37.124673 | orchestrator | 2025-05-19 15:22:37.124690 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-05-19 15:22:37.124703 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-05-19 15:22:37.124714 | orchestrator | rtt min/avg/max/mdev = 1.791/2.983/4.737/1.266 ms 2025-05-19 15:22:37.125309 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 15:22:37.125334 | orchestrator | + ping -c3 192.168.112.177 2025-05-19 15:22:37.133545 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 
2025-05-19 15:22:37.133600 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=5.50 ms 2025-05-19 15:22:38.133929 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=3.04 ms 2025-05-19 15:22:39.135704 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=2.86 ms 2025-05-19 15:22:39.135822 | orchestrator | 2025-05-19 15:22:39.135838 | orchestrator | --- 192.168.112.177 ping statistics --- 2025-05-19 15:22:39.135850 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-05-19 15:22:39.135882 | orchestrator | rtt min/avg/max/mdev = 2.856/3.797/5.499/1.205 ms 2025-05-19 15:22:39.371713 | orchestrator | ok: Runtime: 0:17:53.177714 2025-05-19 15:22:39.432174 | 2025-05-19 15:22:39.432362 | TASK [Run tempest] 2025-05-19 15:22:39.981122 | orchestrator | skipping: Conditional result was False 2025-05-19 15:22:39.998777 | 2025-05-19 15:22:39.998989 | TASK [Check prometheus alert status] 2025-05-19 15:22:40.534964 | orchestrator | skipping: Conditional result was False 2025-05-19 15:22:40.538086 | 2025-05-19 15:22:40.538253 | PLAY RECAP 2025-05-19 15:22:40.538396 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-05-19 15:22:40.538459 | 2025-05-19 15:22:40.853022 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-05-19 15:22:40.862524 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-19 15:22:42.629641 | 2025-05-19 15:22:42.629816 | PLAY [Post output play] 2025-05-19 15:22:42.656163 | 2025-05-19 15:22:42.656312 | LOOP [stage-output : Register sources] 2025-05-19 15:22:42.710626 | 2025-05-19 15:22:42.710883 | TASK [stage-output : Check sudo] 2025-05-19 15:22:43.612983 | orchestrator | sudo: a password is required 2025-05-19 15:22:43.745951 | orchestrator | ok: Runtime: 0:00:00.013343 2025-05-19 15:22:43.762166 | 2025-05-19 15:22:43.762318 | LOOP [stage-output : Set source and destination for files and folders] 2025-05-19 15:22:43.801982 | 2025-05-19 15:22:43.802263 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-05-19 15:22:43.882082 | orchestrator | ok 2025-05-19 15:22:43.891298 | 2025-05-19 15:22:43.891436 | LOOP [stage-output : Ensure target folders exist] 2025-05-19 15:22:44.343598 | orchestrator | ok: "docs" 2025-05-19 15:22:44.343919 | 2025-05-19 15:22:44.599957 | orchestrator | ok: "artifacts" 2025-05-19 15:22:44.832497 | orchestrator | ok: "logs" 2025-05-19 15:22:44.845164 | 2025-05-19 15:22:44.845316 | LOOP [stage-output : Copy files and folders to staging folder] 2025-05-19 15:22:44.881940 | 2025-05-19 15:22:44.882208 | TASK [stage-output : Make all log files readable] 2025-05-19 15:22:45.156936 | orchestrator | ok 2025-05-19 15:22:45.173837 | 2025-05-19 15:22:45.174096 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-05-19 15:22:45.234360 | orchestrator | skipping: Conditional result was False 2025-05-19 15:22:45.243988 | 2025-05-19 15:22:45.244113 | TASK [stage-output : Discover log files for compression] 2025-05-19 15:22:45.269000 | orchestrator | skipping: Conditional result was False 2025-05-19 15:22:45.281204 | 2025-05-19 15:22:45.281328 | LOOP [stage-output : Archive everything from logs] 2025-05-19 15:22:45.322436 | 2025-05-19 15:22:45.322646 | PLAY [Post cleanup play] 2025-05-19 15:22:45.331131 | 2025-05-19 15:22:45.331245 | TASK [Set cloud fact (Zuul deployment)] 2025-05-19 15:22:45.396844 | orchestrator | ok 2025-05-19 
15:22:45.409123 | 2025-05-19 15:22:45.409260 | TASK [Set cloud fact (local deployment)] 2025-05-19 15:22:45.453902 | orchestrator | skipping: Conditional result was False 2025-05-19 15:22:45.471432 | 2025-05-19 15:22:45.471642 | TASK [Clean the cloud environment] 2025-05-19 15:22:46.051805 | orchestrator | 2025-05-19 15:22:46 - clean up servers 2025-05-19 15:22:46.805222 | orchestrator | 2025-05-19 15:22:46 - testbed-manager 2025-05-19 15:22:46.887306 | orchestrator | 2025-05-19 15:22:46 - testbed-node-5 2025-05-19 15:22:46.965466 | orchestrator | 2025-05-19 15:22:46 - testbed-node-1 2025-05-19 15:22:47.044779 | orchestrator | 2025-05-19 15:22:47 - testbed-node-4 2025-05-19 15:22:47.125598 | orchestrator | 2025-05-19 15:22:47 - testbed-node-2 2025-05-19 15:22:47.213738 | orchestrator | 2025-05-19 15:22:47 - testbed-node-0 2025-05-19 15:22:47.299377 | orchestrator | 2025-05-19 15:22:47 - testbed-node-3 2025-05-19 15:22:47.393734 | orchestrator | 2025-05-19 15:22:47 - clean up keypairs 2025-05-19 15:22:47.411931 | orchestrator | 2025-05-19 15:22:47 - testbed 2025-05-19 15:22:47.435754 | orchestrator | 2025-05-19 15:22:47 - wait for servers to be gone 2025-05-19 15:22:56.621646 | orchestrator | 2025-05-19 15:22:56 - clean up ports 2025-05-19 15:22:56.820477 | orchestrator | 2025-05-19 15:22:56 - 01872746-5a4f-4407-addb-23474dd33988 2025-05-19 15:22:57.343960 | orchestrator | 2025-05-19 15:22:57 - 1151c3b6-ab7a-493b-9f3c-e2718e43b3d7 2025-05-19 15:22:57.543565 | orchestrator | 2025-05-19 15:22:57 - 3052f314-d86b-455f-87e3-98ce9550dd0f 2025-05-19 15:22:57.786261 | orchestrator | 2025-05-19 15:22:57 - 3e07441b-1cc8-444b-89e0-062071b66fdc 2025-05-19 15:22:58.000671 | orchestrator | 2025-05-19 15:22:58 - 53d30ad2-59f0-4f69-9c13-b1f67c563a2c 2025-05-19 15:22:58.348264 | orchestrator | 2025-05-19 15:22:58 - ad9fa078-cdd6-4582-8298-17ad61d0b902 2025-05-19 15:22:58.561300 | orchestrator | 2025-05-19 15:22:58 - e744b9a2-6cfa-4166-aa98-4763ceabcfad 2025-05-19 15:22:58.776305 | orchestrator | 2025-05-19 15:22:58 - clean up volumes 2025-05-19 15:22:58.901459 | orchestrator | 2025-05-19 15:22:58 - testbed-volume-0-node-base 2025-05-19 15:22:58.941480 | orchestrator | 2025-05-19 15:22:58 - testbed-volume-1-node-base 2025-05-19 15:22:58.981945 | orchestrator | 2025-05-19 15:22:58 - testbed-volume-5-node-base 2025-05-19 15:22:59.024536 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-4-node-base 2025-05-19 15:22:59.065363 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-3-node-base 2025-05-19 15:22:59.112318 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-2-node-base 2025-05-19 15:22:59.152697 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-manager-base 2025-05-19 15:22:59.196271 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-1-node-4 2025-05-19 15:22:59.237941 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-2-node-5 2025-05-19 15:22:59.282763 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-3-node-3 2025-05-19 15:22:59.323597 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-0-node-3 2025-05-19 15:22:59.369317 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-5-node-5 2025-05-19 15:22:59.414994 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-8-node-5 2025-05-19 15:22:59.458423 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-6-node-3 2025-05-19 15:22:59.498889 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-7-node-4 2025-05-19 15:22:59.543277 | orchestrator | 2025-05-19 15:22:59 - testbed-volume-4-node-4 2025-05-19 15:22:59.589643 | 
orchestrator | 2025-05-19 15:22:59 - disconnect routers 2025-05-19 15:23:00.100174 | orchestrator | 2025-05-19 15:23:00 - testbed 2025-05-19 15:23:00.972377 | orchestrator | 2025-05-19 15:23:00 - clean up subnets 2025-05-19 15:23:01.028907 | orchestrator | 2025-05-19 15:23:01 - subnet-testbed-management 2025-05-19 15:23:01.205030 | orchestrator | 2025-05-19 15:23:01 - clean up networks 2025-05-19 15:23:01.346083 | orchestrator | 2025-05-19 15:23:01 - net-testbed-management 2025-05-19 15:23:01.623098 | orchestrator | 2025-05-19 15:23:01 - clean up security groups 2025-05-19 15:23:01.664497 | orchestrator | 2025-05-19 15:23:01 - testbed-management 2025-05-19 15:23:01.785016 | orchestrator | 2025-05-19 15:23:01 - testbed-node 2025-05-19 15:23:01.895382 | orchestrator | 2025-05-19 15:23:01 - clean up floating ips 2025-05-19 15:23:01.931332 | orchestrator | 2025-05-19 15:23:01 - 81.163.192.238 2025-05-19 15:23:02.667340 | orchestrator | 2025-05-19 15:23:02 - clean up routers 2025-05-19 15:23:02.764099 | orchestrator | 2025-05-19 15:23:02 - testbed 2025-05-19 15:23:04.029323 | orchestrator | ok: Runtime: 0:00:17.839333 2025-05-19 15:23:04.033849 | 2025-05-19 15:23:04.034015 | PLAY RECAP 2025-05-19 15:23:04.034145 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-05-19 15:23:04.034206 | 2025-05-19 15:23:04.172627 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-19 15:23:04.177103 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-19 15:23:04.945039 | 2025-05-19 15:23:04.945198 | PLAY [Cleanup play] 2025-05-19 15:23:04.961312 | 2025-05-19 15:23:04.961443 | TASK [Set cloud fact (Zuul deployment)] 2025-05-19 15:23:05.027750 | orchestrator | ok 2025-05-19 15:23:05.036608 | 2025-05-19 15:23:05.036754 | TASK [Set cloud fact (local deployment)] 2025-05-19 15:23:05.081517 | orchestrator | skipping: Conditional result was False 2025-05-19 15:23:05.097083 | 2025-05-19 15:23:05.097218 | TASK [Clean the cloud environment] 2025-05-19 15:23:06.276225 | orchestrator | 2025-05-19 15:23:06 - clean up servers 2025-05-19 15:23:06.751013 | orchestrator | 2025-05-19 15:23:06 - clean up keypairs 2025-05-19 15:23:06.765209 | orchestrator | 2025-05-19 15:23:06 - wait for servers to be gone 2025-05-19 15:23:06.802531 | orchestrator | 2025-05-19 15:23:06 - clean up ports 2025-05-19 15:23:06.869801 | orchestrator | 2025-05-19 15:23:06 - clean up volumes 2025-05-19 15:23:06.931235 | orchestrator | 2025-05-19 15:23:06 - disconnect routers 2025-05-19 15:23:06.952247 | orchestrator | 2025-05-19 15:23:06 - clean up subnets 2025-05-19 15:23:06.970380 | orchestrator | 2025-05-19 15:23:06 - clean up networks 2025-05-19 15:23:07.127567 | orchestrator | 2025-05-19 15:23:07 - clean up security groups 2025-05-19 15:23:07.162172 | orchestrator | 2025-05-19 15:23:07 - clean up floating ips 2025-05-19 15:23:07.187021 | orchestrator | 2025-05-19 15:23:07 - clean up routers 2025-05-19 15:23:07.639773 | orchestrator | ok: Runtime: 0:00:01.316479 2025-05-19 15:23:07.644732 | 2025-05-19 15:23:07.644976 | PLAY RECAP 2025-05-19 15:23:07.645167 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-05-19 15:23:07.645266 | 2025-05-19 15:23:07.787201 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-19 15:23:07.789620 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 
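Both cleanup passes above (the post play, which still found the full testbed, and the cleanup play, which found an already-empty project and fell through in about a second) walk the resources in dependency order: servers and their keypair first, then the ports and volumes left behind, then the router is detached from its subnet before subnet and network are removed, and the floating IP is released before the router itself goes. The same order expressed as plain CLI calls, as a sketch using the names from the log (the playbook does this through the SDK and explicitly waits for the servers to disappear before touching ports):

    # Instances and their keypair first; everything else hangs off them.
    openstack --os-cloud test server delete --wait testbed-manager \
        testbed-node-0 testbed-node-1 testbed-node-2 \
        testbed-node-3 testbed-node-4 testbed-node-5
    openstack --os-cloud test keypair delete testbed
    # Leftover ports and volumes, deleted by ID/name as in the log.
    openstack --os-cloud test port delete 01872746-5a4f-4407-addb-23474dd33988    # ...and the rest
    openstack --os-cloud test volume delete testbed-volume-manager-base           # ...and the rest
    # Detach the router so the subnet and network can be removed.
    openstack --os-cloud test router remove subnet testbed subnet-testbed-management
    openstack --os-cloud test subnet delete subnet-testbed-management
    openstack --os-cloud test network delete net-testbed-management
    openstack --os-cloud test security group delete testbed-management testbed-node
    # Release the floating IP, then the router itself can go.
    openstack --os-cloud test floating ip delete 81.163.192.238
    openstack --os-cloud test router delete testbed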
2025-05-19 15:23:07.789620 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-19 15:23:08.549404 |
2025-05-19 15:23:08.549617 | PLAY [Base post-fetch]
2025-05-19 15:23:08.567131 |
2025-05-19 15:23:08.567277 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-19 15:23:08.622661 | orchestrator | skipping: Conditional result was False
2025-05-19 15:23:08.636508 |
2025-05-19 15:23:08.636758 | TASK [fetch-output : Set log path for single node]
2025-05-19 15:23:08.695596 | orchestrator | ok
2025-05-19 15:23:08.706255 |
2025-05-19 15:23:08.706414 | LOOP [fetch-output : Ensure local output dirs]
2025-05-19 15:23:09.206652 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/14da8de40697410c90def8b74f0720f7/work/logs"
2025-05-19 15:23:09.487609 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/14da8de40697410c90def8b74f0720f7/work/artifacts"
2025-05-19 15:23:09.788484 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/14da8de40697410c90def8b74f0720f7/work/docs"
2025-05-19 15:23:09.817973 |
2025-05-19 15:23:09.818159 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-19 15:23:10.786183 | orchestrator | changed: .d..t...... ./
2025-05-19 15:23:10.786482 | orchestrator | changed: All items complete
2025-05-19 15:23:10.786525 |
2025-05-19 15:23:11.537306 | orchestrator | changed: .d..t...... ./
2025-05-19 15:23:12.287172 | orchestrator | changed: .d..t...... ./
2025-05-19 15:23:12.316428 |
2025-05-19 15:23:12.316612 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-19 15:23:12.356840 | orchestrator | skipping: Conditional result was False
2025-05-19 15:23:12.359492 | orchestrator | skipping: Conditional result was False
2025-05-19 15:23:12.377834 |
2025-05-19 15:23:12.377951 | PLAY RECAP
2025-05-19 15:23:12.378025 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-19 15:23:12.378064 |
2025-05-19 15:23:12.508831 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
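The ".d..t...... ./" strings in the collect loop are rsync --itemize-changes output: the second character is the file type ("d" for directory) and a "t" in the attribute columns means only the modification time differed, i.e. the output directories were synced to the executor but no file content changed. A small decoder for the common fields, illustrative only and following the YXcstpoguax layout documented in the rsync man page:

    # Decode rsync --itemize-changes strings such as ".d..t......".
    FILE_TYPES = {"f": "file", "d": "directory", "L": "symlink",
                  "D": "device", "S": "special"}
    ATTR_NAMES = ("checksum", "size", "mtime", "perms",
                  "owner", "group", "atime", "acl", "xattr")

    def decode_itemized(code: str) -> str:
        kind = FILE_TYPES.get(code[1], "unknown")
        changed = [name for flag, name in zip(code[2:], ATTR_NAMES)
                   if flag not in ". +"]
        return f"{kind}: {', '.join(changed) or 'no attribute changes'}"

    print(decode_itemized(".d..t......"))  # -> directory: mtime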
2025-05-19 15:23:12.512154 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-19 15:23:13.261728 |
2025-05-19 15:23:13.261902 | PLAY [Base post]
2025-05-19 15:23:13.276878 |
2025-05-19 15:23:13.277034 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-19 15:23:14.309262 | orchestrator | changed
2025-05-19 15:23:14.318914 |
2025-05-19 15:23:14.319055 | PLAY RECAP
2025-05-19 15:23:14.319131 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-19 15:23:14.319199 |
2025-05-19 15:23:14.437086 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-19 15:23:14.440372 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-19 15:23:15.270471 |
2025-05-19 15:23:15.270797 | PLAY [Base post-logs]
2025-05-19 15:23:15.287324 |
2025-05-19 15:23:15.287463 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-19 15:23:15.760763 | localhost | changed
2025-05-19 15:23:15.803747 |
2025-05-19 15:23:15.804234 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-19 15:23:15.846782 | localhost | ok
2025-05-19 15:23:15.853456 |
2025-05-19 15:23:15.853815 | TASK [Set zuul-log-path fact]
2025-05-19 15:23:15.871478 | localhost | ok
2025-05-19 15:23:15.885667 |
2025-05-19 15:23:15.885904 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-19 15:23:15.924471 | localhost | ok
2025-05-19 15:23:15.929725 |
2025-05-19 15:23:15.929865 | TASK [upload-logs : Create log directories]
2025-05-19 15:23:16.435836 | localhost | changed
2025-05-19 15:23:16.439945 |
2025-05-19 15:23:16.440078 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-19 15:23:16.944657 | localhost -> localhost | ok: Runtime: 0:00:00.008573
2025-05-19 15:23:16.957376 |
2025-05-19 15:23:16.957593 | TASK [upload-logs : Upload logs to log server]
2025-05-19 15:23:17.578440 | localhost | Output suppressed because no_log was given
2025-05-19 15:23:17.583477 |
2025-05-19 15:23:17.583671 | LOOP [upload-logs : Compress console log and json output]
2025-05-19 15:23:17.648359 | localhost | skipping: Conditional result was False
2025-05-19 15:23:17.657488 | localhost | skipping: Conditional result was False
2025-05-19 15:23:17.666637 |
2025-05-19 15:23:17.666996 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-19 15:23:17.723406 | localhost | skipping: Conditional result was False
2025-05-19 15:23:17.723725 |
2025-05-19 15:23:17.728277 | localhost | skipping: Conditional result was False
2025-05-19 15:23:17.740800 |
2025-05-19 15:23:17.741062 | LOOP [upload-logs : Upload console log and json output]
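Base post-logs first builds a JSON manifest of the collected log tree and returns its URL to the scheduler (the "Return Zuul manifest URL to Zuul" task above), so the Zuul dashboard can render a browsable file listing; then upload-logs pushes everything to the log server. The console log is itself one of the uploaded files, which is why this copy ends mid-task: output produced after the console log is uploaded cannot appear in the uploaded file. A rough approximation of the manifest step only; the real generate-zuul-manifest role emits a richer format with mimetypes and encodings, and the path below is an example, not taken from this build:

    import json
    import os

    def build_tree(root: str) -> list:
        """Recursively index a log directory as a list of entries."""
        entries = []
        for name in sorted(os.listdir(root)):
            path = os.path.join(root, name)
            if os.path.isdir(path):
                entries.append({"name": name,
                                "mimetype": "application/directory",
                                "children": build_tree(path)})
            else:
                entries.append({"name": name, "mimetype": "text/plain"})
        return entries

    # Example path only; the real role runs against the collected work/logs dir.
    manifest = {"tree": build_tree("/var/lib/zuul/builds/example/work/logs")}
    print(json.dumps(manifest, indent=2))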